[HIT Edition] Dynamic ReLU: the Adaptively Parametric ReLU, with Keras Code (Tuning Notes 3)

2020-05-27 14:32:07

This article introduces a dynamic ReLU (Dynamic ReLU) activation function proposed by a team at Harbin Institute of Technology: the adaptively parametric ReLU (APReLU). Originally applied to fault diagnosis based on one-dimensional vibration signals, it allows each sample to have its own unique ReLU parameters. The paper was submitted to IEEE Transactions on Industrial Electronics on May 3, 2019, accepted on January 24, 2020, and published on the IEEE website on February 13, 2020.

Continued from the previous post:

A Dynamic ReLU: the Adaptively Parametric ReLU Activation Function (Tuning Notes 2)

This post continues testing the accuracy of a deep residual network with the adaptively parametric ReLU activation function on the CIFAR-10 dataset. The number of residual blocks is still set to 27, the numbers of convolution kernels are increased to 16, 32, and 64 (9 blocks at 16 kernels, then 1 + 8 blocks at 32, then 1 + 8 blocks at 64), and the number of training epochs is cut from 1000 to 500 (mainly to save time).

The adaptively parametric ReLU is an upgraded version of the Parametric ReLU (PReLU) and a kind of dynamic ReLU activation function, as shown in the figure below:

[Figure: the adaptively parametric ReLU, a Dynamic ReLU activation function]
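In formula form (this matches what the aprelu function in the code below computes), APReLU keeps the positive part of the input unchanged and scales the negative part by a per-sample, per-channel coefficient vector learned by a small embedded network:

$$y = \max(x, 0) + \alpha \odot \min(x, 0)$$

$$\alpha = \mathrm{sigmoid}\Big(\mathrm{BN}\big(W_2 \cdot \mathrm{ReLU}\big(\mathrm{BN}\big(W_1 \cdot [\mathrm{GAP}(\min(x,0));\ \mathrm{GAP}(\max(x,0))]\big)\big)\big)\Big)$$

where GAP denotes global average pooling, BN denotes batch normalization, W1 and W2 are the weights of the two fully connected layers, and the sigmoid keeps each scaling coefficient in (0, 1).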

The Keras code is as follows:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.0.1 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, 
IEEE Transactions on Industrial Electronics, DOI: 10.1109/TIE.2020.2972458,
Date of Publication: 13 February 2020

@author: Minghang Zhao
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
K.set_learning_phase(1)

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Normalize the data and center it on the training-set mean
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test-np.mean(x_train)
x_train = x_train-np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Schedule the learning rate: multiply by 0.1 every 200 epochs
def scheduler(epoch):
    if epoch % 200 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.1)
        print("lr changed to {}".format(lr * 0.1))
    return K.get_value(model.optimizer.lr)

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
    # get the number of channels
    channels = inputs.get_shape().as_list()[-1]
    # get a zero feature map
    zeros_input = keras.layers.subtract([inputs, inputs])
    # get a feature map with only positive features
    pos_input = Activation('relu')(inputs)
    # get a feature map with only negative features
    neg_input = Minimum()([inputs,zeros_input])
    # define a network to obtain the scaling coefficients
    scales_p = GlobalAveragePooling2D()(pos_input)
    scales_n = GlobalAveragePooling2D()(neg_input)
    scales = Concatenate()([scales_n, scales_p])
    scales = Dense(channels//4, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization()(scales)
    scales = Activation('relu')(scales)
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization()(scales)
    scales = Activation('sigmoid')(scales)
    scales = Reshape((1,1,channels))(scales)
    # apply the adaptively parametric relu: positive part plus scaled negative part
    neg_part = keras.layers.multiply([scales, neg_input])
    return keras.layers.add([pos_input, neg_part])

# Residual Block
def residual_block(incoming, nb_blocks, out_channels, downsample=False,
                   downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization()(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization()(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Downsampling
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
            
        # Zero-padding to match the number of channels
        if in_channels != out_channels:
            zeros_identity = keras.layers.subtract([identity, identity])
            identity = keras.layers.concatenate([identity, zeros_identity])
            in_channels = out_channels
        
        residual = keras.layers.add([residual, identity])
    
    return residual


# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 9, 16, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
net = residual_block(net, 8, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
net = residual_block(net, 8, 64, downsample=False)
net = BatchNormalization()(net)
net = aprelu(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
    # randomly rotate images in the range (degrees, 0 to 30)
    rotation_range=30,
    # randomly flip images
    horizontal_flip=True,
    # randomly shift images horizontally
    width_shift_range=0.125,
    # randomly shift images vertically
    height_shift_range=0.125)

reduce_lr = LearningRateScheduler(scheduler)
# fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=100),
                    validation_data=(x_test, y_test), epochs=500, 
                    verbose=1, callbacks=[reduce_lr], workers=4)

# get results
K.set_learning_phase(0)
DRSN_train_score1 = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score1[0])
print('Train accuracy:', DRSN_train_score1[1])
DRSN_test_score1 = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score1[0])
print('Test accuracy:', DRSN_test_score1[1])
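
As an aside, the scheduler function above implements a plain step decay: the learning rate starts at 0.1 and is multiplied by 0.1 at epochs 200 and 400. A minimal standalone sketch of the same rule, useful for sanity-checking (the helper name stepped_lr is ours, purely for illustration):

# Step decay equivalent to the scheduler above:
# 0.1 for epochs 0-199, 0.01 for 200-399, 0.001 for 400-499
def stepped_lr(epoch, base_lr=0.1, drop=0.1, period=200):
    return base_lr * (drop ** (epoch // period))

for epoch in [0, 199, 200, 399, 400, 499]:
    print(epoch, stepped_lr(epoch))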

The experimental results are as follows:

Using TensorFlow backend.
x_train shape: (50000, 32, 32, 3)
50000 train samples
10000 test samples
Epoch 1/500
91s 181ms/step - loss: 2.4539 - acc: 0.4100 - val_loss: 2.0730 - val_acc: 0.5339
Epoch 2/500
63s 126ms/step - loss: 1.9860 - acc: 0.5463 - val_loss: 1.7375 - val_acc: 0.6207
Epoch 3/500
63s 126ms/step - loss: 1.7263 - acc: 0.6070 - val_loss: 1.5633 - val_acc: 0.6542
Epoch 4/500
63s 126ms/step - loss: 1.5410 - acc: 0.6480 - val_loss: 1.4049 - val_acc: 0.6839
Epoch 5/500
63s 126ms/step - loss: 1.4072 - acc: 0.6701 - val_loss: 1.3024 - val_acc: 0.7038
Epoch 6/500
63s 126ms/step - loss: 1.2918 - acc: 0.6950 - val_loss: 1.1935 - val_acc: 0.7256
Epoch 7/500
63s 126ms/step - loss: 1.1959 - acc: 0.7151 - val_loss: 1.0884 - val_acc: 0.7488
Epoch 8/500
63s 126ms/step - loss: 1.1186 - acc: 0.7316 - val_loss: 1.0709 - val_acc: 0.7462
Epoch 9/500
63s 126ms/step - loss: 1.0602 - acc: 0.7459 - val_loss: 0.9674 - val_acc: 0.7760
Epoch 10/500
63s 126ms/step - loss: 1.0074 - acc: 0.7569 - val_loss: 0.9300 - val_acc: 0.7801
Epoch 11/500
63s 126ms/step - loss: 0.9667 - acc: 0.7662 - val_loss: 0.9094 - val_acc: 0.7894
Epoch 12/500
64s 127ms/step - loss: 0.9406 - acc: 0.7689 - val_loss: 0.8765 - val_acc: 0.7899
Epoch 13/500
63s 127ms/step - loss: 0.9083 - acc: 0.7775 - val_loss: 0.8589 - val_acc: 0.7949
Epoch 14/500
63s 127ms/step - loss: 0.8872 - acc: 0.7832 - val_loss: 0.8389 - val_acc: 0.7997
Epoch 15/500
63s 127ms/step - loss: 0.8653 - acc: 0.7877 - val_loss: 0.8390 - val_acc: 0.7990
Epoch 16/500
63s 126ms/step - loss: 0.8529 - acc: 0.7901 - val_loss: 0.8052 - val_acc: 0.8061
Epoch 17/500
63s 126ms/step - loss: 0.8347 - acc: 0.7964 - val_loss: 0.8033 - val_acc: 0.8101
Epoch 18/500
63s 126ms/step - loss: 0.8186 - acc: 0.8014 - val_loss: 0.7835 - val_acc: 0.8171
Epoch 19/500
63s 126ms/step - loss: 0.8080 - acc: 0.8026 - val_loss: 0.7852 - val_acc: 0.8172
Epoch 20/500
63s 126ms/step - loss: 0.7982 - acc: 0.8070 - val_loss: 0.7596 - val_acc: 0.8249
Epoch 21/500
63s 126ms/step - loss: 0.7932 - acc: 0.8079 - val_loss: 0.7477 - val_acc: 0.8266
Epoch 22/500
63s 126ms/step - loss: 0.7862 - acc: 0.8106 - val_loss: 0.7489 - val_acc: 0.8285
Epoch 23/500
63s 126ms/step - loss: 0.7762 - acc: 0.8145 - val_loss: 0.7451 - val_acc: 0.8301
Epoch 24/500
63s 126ms/step - loss: 0.7691 - acc: 0.8174 - val_loss: 0.7402 - val_acc: 0.8271
Epoch 25/500
63s 126ms/step - loss: 0.7651 - acc: 0.8207 - val_loss: 0.7442 - val_acc: 0.8316
Epoch 26/500
63s 126ms/step - loss: 0.7562 - acc: 0.8218 - val_loss: 0.7177 - val_acc: 0.8392
Epoch 27/500
63s 126ms/step - loss: 0.7521 - acc: 0.8241 - val_loss: 0.7243 - val_acc: 0.8356
Epoch 28/500
63s 126ms/step - loss: 0.7436 - acc: 0.8254 - val_loss: 0.7505 - val_acc: 0.8289
Epoch 29/500
63s 126ms/step - loss: 0.7429 - acc: 0.8265 - val_loss: 0.7424 - val_acc: 0.8292
Epoch 30/500
63s 126ms/step - loss: 0.7391 - acc: 0.8313 - val_loss: 0.7185 - val_acc: 0.8392
Epoch 31/500
63s 126ms/step - loss: 0.7361 - acc: 0.8323 - val_loss: 0.7276 - val_acc: 0.8406
Epoch 32/500
63s 126ms/step - loss: 0.7311 - acc: 0.8343 - val_loss: 0.7167 - val_acc: 0.8405
Epoch 33/500
63s 126ms/step - loss: 0.7247 - acc: 0.8346 - val_loss: 0.7345 - val_acc: 0.8382
Epoch 34/500
63s 126ms/step - loss: 0.7196 - acc: 0.8378 - val_loss: 0.7058 - val_acc: 0.8481
Epoch 35/500
63s 126ms/step - loss: 0.7132 - acc: 0.8400 - val_loss: 0.7212 - val_acc: 0.8457
Epoch 36/500
63s 126ms/step - loss: 0.7112 - acc: 0.8436 - val_loss: 0.7031 - val_acc: 0.8496
Epoch 37/500
63s 126ms/step - loss: 0.7101 - acc: 0.8429 - val_loss: 0.7199 - val_acc: 0.8421
Epoch 38/500
63s 126ms/step - loss: 0.7093 - acc: 0.8439 - val_loss: 0.6786 - val_acc: 0.8550
Epoch 39/500
63s 126ms/step - loss: 0.7026 - acc: 0.8453 - val_loss: 0.7023 - val_acc: 0.8474
Epoch 40/500
63s 126ms/step - loss: 0.6992 - acc: 0.8470 - val_loss: 0.6993 - val_acc: 0.8491
Epoch 41/500
63s 126ms/step - loss: 0.6955 - acc: 0.8485 - val_loss: 0.7176 - val_acc: 0.8447
Epoch 42/500
63s 127ms/step - loss: 0.6987 - acc: 0.8471 - val_loss: 0.7265 - val_acc: 0.8433
Epoch 43/500
63s 126ms/step - loss: 0.6953 - acc: 0.8504 - val_loss: 0.6921 - val_acc: 0.8523
Epoch 44/500
63s 126ms/step - loss: 0.6875 - acc: 0.8522 - val_loss: 0.6824 - val_acc: 0.8584
Epoch 45/500
63s 126ms/step - loss: 0.6888 - acc: 0.8518 - val_loss: 0.6953 - val_acc: 0.8534
Epoch 46/500
63s 126ms/step - loss: 0.6816 - acc: 0.8538 - val_loss: 0.7102 - val_acc: 0.8492
Epoch 47/500
63s 126ms/step - loss: 0.6857 - acc: 0.8545 - val_loss: 0.6985 - val_acc: 0.8504
Epoch 48/500
63s 126ms/step - loss: 0.6835 - acc: 0.8533 - val_loss: 0.6992 - val_acc: 0.8540
Epoch 49/500
63s 126ms/step - loss: 0.6775 - acc: 0.8568 - val_loss: 0.6907 - val_acc: 0.8543
Epoch 50/500
63s 126ms/step - loss: 0.6782 - acc: 0.8554 - val_loss: 0.7010 - val_acc: 0.8504
Epoch 51/500
63s 126ms/step - loss: 0.6756 - acc: 0.8561 - val_loss: 0.6905 - val_acc: 0.8544
Epoch 52/500
63s 126ms/step - loss: 0.6730 - acc: 0.8581 - val_loss: 0.6838 - val_acc: 0.8568
Epoch 53/500
63s 126ms/step - loss: 0.6681 - acc: 0.8595 - val_loss: 0.6835 - val_acc: 0.8578
Epoch 54/500
63s 126ms/step - loss: 0.6691 - acc: 0.8593 - val_loss: 0.6691 - val_acc: 0.8647
Epoch 55/500
63s 126ms/step - loss: 0.6637 - acc: 0.8627 - val_loss: 0.6778 - val_acc: 0.8580
Epoch 56/500
63s 126ms/step - loss: 0.6661 - acc: 0.8620 - val_loss: 0.6654 - val_acc: 0.8639
Epoch 57/500
63s 126ms/step - loss: 0.6623 - acc: 0.8618 - val_loss: 0.6829 - val_acc: 0.8580
Epoch 58/500
64s 127ms/step - loss: 0.6626 - acc: 0.8636 - val_loss: 0.6701 - val_acc: 0.8610
Epoch 59/500
64s 127ms/step - loss: 0.6584 - acc: 0.8625 - val_loss: 0.6879 - val_acc: 0.8538
Epoch 60/500
63s 127ms/step - loss: 0.6530 - acc: 0.8653 - val_loss: 0.6670 - val_acc: 0.8641
Epoch 61/500
64s 127ms/step - loss: 0.6563 - acc: 0.8655 - val_loss: 0.6671 - val_acc: 0.8639
Epoch 62/500
64s 127ms/step - loss: 0.6543 - acc: 0.8656 - val_loss: 0.6792 - val_acc: 0.8620
Epoch 63/500
63s 127ms/step - loss: 0.6549 - acc: 0.8653 - val_loss: 0.6826 - val_acc: 0.8581
Epoch 64/500
63s 127ms/step - loss: 0.6477 - acc: 0.8696 - val_loss: 0.6842 - val_acc: 0.8599
Epoch 65/500
64s 127ms/step - loss: 0.6556 - acc: 0.8649 - val_loss: 0.6681 - val_acc: 0.8625
Epoch 66/500
63s 127ms/step - loss: 0.6463 - acc: 0.8690 - val_loss: 0.6611 - val_acc: 0.8673
Epoch 67/500
64s 127ms/step - loss: 0.6462 - acc: 0.8703 - val_loss: 0.6766 - val_acc: 0.8605
Epoch 68/500
64s 127ms/step - loss: 0.6420 - acc: 0.8705 - val_loss: 0.6551 - val_acc: 0.8687
Epoch 69/500
63s 127ms/step - loss: 0.6353 - acc: 0.8737 - val_loss: 0.6761 - val_acc: 0.8635
Epoch 70/500
64s 127ms/step - loss: 0.6473 - acc: 0.8699 - val_loss: 0.6616 - val_acc: 0.8684
Epoch 71/500
63s 127ms/step - loss: 0.6335 - acc: 0.8743 - val_loss: 0.6712 - val_acc: 0.8656
Epoch 72/500
63s 127ms/step - loss: 0.6325 - acc: 0.8738 - val_loss: 0.6801 - val_acc: 0.8604
Epoch 73/500
64s 127ms/step - loss: 0.6378 - acc: 0.8719 - val_loss: 0.6607 - val_acc: 0.8678
Epoch 74/500
64s 127ms/step - loss: 0.6355 - acc: 0.8743 - val_loss: 0.6568 - val_acc: 0.8671
Epoch 75/500
63s 127ms/step - loss: 0.6344 - acc: 0.8744 - val_loss: 0.6646 - val_acc: 0.8646
Epoch 76/500
64s 127ms/step - loss: 0.6283 - acc: 0.8745 - val_loss: 0.6571 - val_acc: 0.8703
Epoch 77/500
64s 127ms/step - loss: 0.6291 - acc: 0.8763 - val_loss: 0.6789 - val_acc: 0.8638
Epoch 78/500
63s 127ms/step - loss: 0.6291 - acc: 0.8781 - val_loss: 0.6485 - val_acc: 0.8708
Epoch 79/500
64s 127ms/step - loss: 0.6285 - acc: 0.8779 - val_loss: 0.6366 - val_acc: 0.8758
Epoch 80/500
64s 127ms/step - loss: 0.6310 - acc: 0.8755 - val_loss: 0.6587 - val_acc: 0.8710
Epoch 81/500
63s 127ms/step - loss: 0.6265 - acc: 0.8770 - val_loss: 0.6511 - val_acc: 0.8685
Epoch 82/500
63s 126ms/step - loss: 0.6246 - acc: 0.8784 - val_loss: 0.6405 - val_acc: 0.8742
Epoch 83/500
63s 126ms/step - loss: 0.6283 - acc: 0.8772 - val_loss: 0.6565 - val_acc: 0.8701
Epoch 84/500
63s 126ms/step - loss: 0.6225 - acc: 0.8778 - val_loss: 0.6565 - val_acc: 0.8731
Epoch 85/500
63s 126ms/step - loss: 0.6185 - acc: 0.8810 - val_loss: 0.6819 - val_acc: 0.8586
Epoch 86/500
63s 126ms/step - loss: 0.6241 - acc: 0.8792 - val_loss: 0.6703 - val_acc: 0.8685
Epoch 87/500
63s 127ms/step - loss: 0.6194 - acc: 0.8811 - val_loss: 0.6514 - val_acc: 0.8705
Epoch 88/500
64s 127ms/step - loss: 0.6159 - acc: 0.8798 - val_loss: 0.6401 - val_acc: 0.8764
Epoch 89/500
64s 127ms/step - loss: 0.6196 - acc: 0.8794 - val_loss: 0.6436 - val_acc: 0.8739
Epoch 90/500
64s 127ms/step - loss: 0.6144 - acc: 0.8817 - val_loss: 0.6491 - val_acc: 0.8718
Epoch 91/500
63s 127ms/step - loss: 0.6180 - acc: 0.8813 - val_loss: 0.6449 - val_acc: 0.8758
Epoch 92/500
63s 127ms/step - loss: 0.6091 - acc: 0.8822 - val_loss: 0.6465 - val_acc: 0.8758
Epoch 93/500
64s 127ms/step - loss: 0.6172 - acc: 0.8825 - val_loss: 0.6414 - val_acc: 0.8754
Epoch 94/500
63s 127ms/step - loss: 0.6110 - acc: 0.8822 - val_loss: 0.6582 - val_acc: 0.8710
Epoch 95/500
63s 126ms/step - loss: 0.6170 - acc: 0.8820 - val_loss: 0.6572 - val_acc: 0.8704
Epoch 96/500
63s 126ms/step - loss: 0.6132 - acc: 0.8843 - val_loss: 0.6744 - val_acc: 0.8656
Epoch 97/500
63s 126ms/step - loss: 0.6127 - acc: 0.8824 - val_loss: 0.6296 - val_acc: 0.8795
Epoch 98/500
63s 126ms/step - loss: 0.6056 - acc: 0.8857 - val_loss: 0.6586 - val_acc: 0.8738
Epoch 99/500
63s 127ms/step - loss: 0.6131 - acc: 0.8831 - val_loss: 0.6579 - val_acc: 0.8719
Epoch 100/500
63s 127ms/step - loss: 0.6076 - acc: 0.8846 - val_loss: 0.6507 - val_acc: 0.8716
Epoch 101/500
63s 127ms/step - loss: 0.6082 - acc: 0.8849 - val_loss: 0.6661 - val_acc: 0.8717
Epoch 102/500
64s 127ms/step - loss: 0.6117 - acc: 0.8836 - val_loss: 0.6860 - val_acc: 0.8612
Epoch 103/500
63s 127ms/step - loss: 0.6068 - acc: 0.8861 - val_loss: 0.6470 - val_acc: 0.8776
Epoch 104/500
64s 127ms/step - loss: 0.6063 - acc: 0.8872 - val_loss: 0.6613 - val_acc: 0.8679
Epoch 105/500
64s 127ms/step - loss: 0.6042 - acc: 0.8844 - val_loss: 0.6494 - val_acc: 0.8781
Epoch 106/500
64s 127ms/step - loss: 0.6036 - acc: 0.8871 - val_loss: 0.6507 - val_acc: 0.8717
Epoch 107/500
63s 127ms/step - loss: 0.6039 - acc: 0.8859 - val_loss: 0.6332 - val_acc: 0.8822
Epoch 108/500
63s 127ms/step - loss: 0.6054 - acc: 0.8865 - val_loss: 0.6511 - val_acc: 0.8737
Epoch 109/500
63s 127ms/step - loss: 0.6038 - acc: 0.8864 - val_loss: 0.6591 - val_acc: 0.8708
Epoch 110/500
63s 127ms/step - loss: 0.5994 - acc: 0.8888 - val_loss: 0.6289 - val_acc: 0.8843
Epoch 111/500
63s 127ms/step - loss: 0.5970 - acc: 0.8882 - val_loss: 0.6455 - val_acc: 0.8778
Epoch 112/500
63s 127ms/step - loss: 0.5990 - acc: 0.8878 - val_loss: 0.6369 - val_acc: 0.8788
Epoch 113/500
64s 127ms/step - loss: 0.6001 - acc: 0.8880 - val_loss: 0.6324 - val_acc: 0.8834
Epoch 114/500
63s 127ms/step - loss: 0.5944 - acc: 0.8893 - val_loss: 0.6233 - val_acc: 0.8844
Epoch 115/500
63s 127ms/step - loss: 0.5906 - acc: 0.8915 - val_loss: 0.6327 - val_acc: 0.8781
Epoch 116/500
63s 127ms/step - loss: 0.6013 - acc: 0.8870 - val_loss: 0.6265 - val_acc: 0.8827
Epoch 117/500
63s 127ms/step - loss: 0.5928 - acc: 0.8915 - val_loss: 0.6423 - val_acc: 0.8766
Epoch 118/500
63s 127ms/step - loss: 0.5988 - acc: 0.8878 - val_loss: 0.6609 - val_acc: 0.8695
Epoch 119/500
64s 127ms/step - loss: 0.5920 - acc: 0.8909 - val_loss: 0.6242 - val_acc: 0.8846
Epoch 120/500
64s 127ms/step - loss: 0.5941 - acc: 0.8894 - val_loss: 0.6528 - val_acc: 0.8716
Epoch 121/500
63s 126ms/step - loss: 0.5939 - acc: 0.8895 - val_loss: 0.6338 - val_acc: 0.8806
Epoch 122/500
63s 126ms/step - loss: 0.5936 - acc: 0.8900 - val_loss: 0.6290 - val_acc: 0.8827
Epoch 123/500
63s 126ms/step - loss: 0.5937 - acc: 0.8891 - val_loss: 0.6471 - val_acc: 0.8693
Epoch 124/500
63s 126ms/step - loss: 0.5900 - acc: 0.8902 - val_loss: 0.6098 - val_acc: 0.8911
Epoch 125/500
63s 126ms/step - loss: 0.5854 - acc: 0.8933 - val_loss: 0.6445 - val_acc: 0.8757
Epoch 126/500
63s 126ms/step - loss: 0.5913 - acc: 0.8898 - val_loss: 0.6354 - val_acc: 0.8824
Epoch 127/500
63s 126ms/step - loss: 0.5927 - acc: 0.8893 - val_loss: 0.6420 - val_acc: 0.8843
Epoch 128/500
63s 126ms/step - loss: 0.5926 - acc: 0.8901 - val_loss: 0.6244 - val_acc: 0.8825
Epoch 129/500
63s 127ms/step - loss: 0.5879 - acc: 0.8906 - val_loss: 0.6230 - val_acc: 0.8849
Epoch 130/500
63s 126ms/step - loss: 0.5917 - acc: 0.8908 - val_loss: 0.6428 - val_acc: 0.8771
Epoch 131/500
63s 126ms/step - loss: 0.5861 - acc: 0.8920 - val_loss: 0.6582 - val_acc: 0.8761
Epoch 132/500
63s 126ms/step - loss: 0.5857 - acc: 0.8934 - val_loss: 0.6353 - val_acc: 0.8792
Epoch 133/500
64s 127ms/step - loss: 0.5868 - acc: 0.8926 - val_loss: 0.6154 - val_acc: 0.8878
Epoch 134/500
63s 126ms/step - loss: 0.5869 - acc: 0.8932 - val_loss: 0.6369 - val_acc: 0.8805
Epoch 135/500
63s 126ms/step - loss: 0.5853 - acc: 0.8934 - val_loss: 0.6133 - val_acc: 0.8832
Epoch 136/500
63s 126ms/step - loss: 0.5818 - acc: 0.8944 - val_loss: 0.6538 - val_acc: 0.8751
Epoch 137/500
63s 126ms/step - loss: 0.5801 - acc: 0.8937 - val_loss: 0.6478 - val_acc: 0.8733
Epoch 138/500
63s 127ms/step - loss: 0.5788 - acc: 0.8955 - val_loss: 0.6310 - val_acc: 0.8805
Epoch 139/500
63s 126ms/step - loss: 0.5828 - acc: 0.8926 - val_loss: 0.6172 - val_acc: 0.8869
Epoch 140/500
63s 126ms/step - loss: 0.5828 - acc: 0.8944 - val_loss: 0.6508 - val_acc: 0.8762
Epoch 141/500
63s 126ms/step - loss: 0.5856 - acc: 0.8934 - val_loss: 0.6242 - val_acc: 0.8797
Epoch 142/500
63s 126ms/step - loss: 0.5815 - acc: 0.8944 - val_loss: 0.6483 - val_acc: 0.8749
Epoch 143/500
63s 127ms/step - loss: 0.5807 - acc: 0.8964 - val_loss: 0.6374 - val_acc: 0.8789
Epoch 144/500
63s 126ms/step - loss: 0.5810 - acc: 0.8943 - val_loss: 0.6414 - val_acc: 0.8782
Epoch 145/500
63s 127ms/step - loss: 0.5807 - acc: 0.8959 - val_loss: 0.6279 - val_acc: 0.8783
Epoch 146/500
63s 126ms/step - loss: 0.5784 - acc: 0.8967 - val_loss: 0.6179 - val_acc: 0.8827
Epoch 147/500
63s 126ms/step - loss: 0.5754 - acc: 0.8948 - val_loss: 0.6358 - val_acc: 0.8791
Epoch 148/500
63s 126ms/step - loss: 0.5764 - acc: 0.8960 - val_loss: 0.6279 - val_acc: 0.8828
Epoch 149/500
63s 126ms/step - loss: 0.5749 - acc: 0.8965 - val_loss: 0.6513 - val_acc: 0.8770
Epoch 150/500
63s 126ms/step - loss: 0.5791 - acc: 0.8964 - val_loss: 0.6436 - val_acc: 0.8795
Epoch 151/500
63s 126ms/step - loss: 0.5786 - acc: 0.8959 - val_loss: 0.6276 - val_acc: 0.8807
Epoch 152/500
63s 127ms/step - loss: 0.5761 - acc: 0.8952 - val_loss: 0.6359 - val_acc: 0.8821
Epoch 153/500
63s 126ms/step - loss: 0.5729 - acc: 0.8967 - val_loss: 0.6416 - val_acc: 0.8779
Epoch 154/500
63s 126ms/step - loss: 0.5742 - acc: 0.8982 - val_loss: 0.6312 - val_acc: 0.8819
Epoch 155/500
63s 126ms/step - loss: 0.5750 - acc: 0.8973 - val_loss: 0.6173 - val_acc: 0.8856
Epoch 156/500
63s 126ms/step - loss: 0.5722 - acc: 0.8972 - val_loss: 0.6239 - val_acc: 0.8850
Epoch 157/500
63s 126ms/step - loss: 0.5760 - acc: 0.8963 - val_loss: 0.6322 - val_acc: 0.8807
Epoch 158/500
63s 126ms/step - loss: 0.5759 - acc: 0.8967 - val_loss: 0.6482 - val_acc: 0.8718
Epoch 159/500
63s 126ms/step - loss: 0.5696 - acc: 0.8991 - val_loss: 0.6134 - val_acc: 0.8857
Epoch 160/500
63s 127ms/step - loss: 0.5722 - acc: 0.8986 - val_loss: 0.6347 - val_acc: 0.8787
Epoch 161/500
64s 127ms/step - loss: 0.5712 - acc: 0.8986 - val_loss: 0.6508 - val_acc: 0.8753
Epoch 162/500
64s 127ms/step - loss: 0.5757 - acc: 0.8968 - val_loss: 0.6117 - val_acc: 0.8860
Epoch 163/500
64s 127ms/step - loss: 0.5679 - acc: 0.8992 - val_loss: 0.6201 - val_acc: 0.8843
Epoch 164/500
64s 127ms/step - loss: 0.5672 - acc: 0.9005 - val_loss: 0.6270 - val_acc: 0.8822
Epoch 165/500
63s 127ms/step - loss: 0.5703 - acc: 0.8994 - val_loss: 0.6234 - val_acc: 0.8832
Epoch 166/500
63s 127ms/step - loss: 0.5704 - acc: 0.8982 - val_loss: 0.6396 - val_acc: 0.8781
Epoch 167/500
63s 127ms/step - loss: 0.5731 - acc: 0.8973 - val_loss: 0.6287 - val_acc: 0.8836
Epoch 168/500
64s 127ms/step - loss: 0.5674 - acc: 0.8997 - val_loss: 0.6274 - val_acc: 0.8840
Epoch 169/500
63s 127ms/step - loss: 0.5710 - acc: 0.8963 - val_loss: 0.6319 - val_acc: 0.8833
Epoch 170/500
64s 127ms/step - loss: 0.5677 - acc: 0.8996 - val_loss: 0.6248 - val_acc: 0.8873
Epoch 171/500
63s 127ms/step - loss: 0.5713 - acc: 0.8987 - val_loss: 0.6324 - val_acc: 0.8819
Epoch 172/500
64s 127ms/step - loss: 0.5674 - acc: 0.9004 - val_loss: 0.6259 - val_acc: 0.8849
Epoch 173/500
63s 127ms/step - loss: 0.5743 - acc: 0.8967 - val_loss: 0.6394 - val_acc: 0.8796
Epoch 174/500
63s 127ms/step - loss: 0.5656 - acc: 0.8995 - val_loss: 0.6117 - val_acc: 0.8833
Epoch 175/500
63s 127ms/step - loss: 0.5643 - acc: 0.9009 - val_loss: 0.6178 - val_acc: 0.8855
Epoch 176/500
64s 127ms/step - loss: 0.5660 - acc: 0.9002 - val_loss: 0.6457 - val_acc: 0.8772
Epoch 177/500
63s 127ms/step - loss: 0.5715 - acc: 0.8991 - val_loss: 0.6284 - val_acc: 0.8854
Epoch 178/500
63s 126ms/step - loss: 0.5704 - acc: 0.9005 - val_loss: 0.6210 - val_acc: 0.8829
Epoch 179/500
63s 126ms/step - loss: 0.5669 - acc: 0.9010 - val_loss: 0.6091 - val_acc: 0.8868
Epoch 180/500
63s 126ms/step - loss: 0.5695 - acc: 0.8991 - val_loss: 0.6315 - val_acc: 0.8817
Epoch 181/500
63s 127ms/step - loss: 0.5679 - acc: 0.8981 - val_loss: 0.5973 - val_acc: 0.8885
Epoch 182/500
63s 127ms/step - loss: 0.5633 - acc: 0.9011 - val_loss: 0.6239 - val_acc: 0.8797
Epoch 183/500
64s 127ms/step - loss: 0.5621 - acc: 0.9014 - val_loss: 0.6133 - val_acc: 0.8911
Epoch 184/500
64s 127ms/step - loss: 0.5660 - acc: 0.9004 - val_loss: 0.6123 - val_acc: 0.8871
Epoch 185/500
63s 127ms/step - loss: 0.5676 - acc: 0.8983 - val_loss: 0.6330 - val_acc: 0.8801
Epoch 186/500
63s 127ms/step - loss: 0.5647 - acc: 0.9008 - val_loss: 0.6295 - val_acc: 0.8816
Epoch 187/500
63s 126ms/step - loss: 0.5637 - acc: 0.9005 - val_loss: 0.6291 - val_acc: 0.8801
Epoch 188/500
63s 127ms/step - loss: 0.5629 - acc: 0.9009 - val_loss: 0.6170 - val_acc: 0.8846
Epoch 189/500
64s 127ms/step - loss: 0.5616 - acc: 0.9013 - val_loss: 0.6206 - val_acc: 0.8827
Epoch 190/500
64s 127ms/step - loss: 0.5678 - acc: 0.8990 - val_loss: 0.6226 - val_acc: 0.8805
Epoch 191/500
63s 126ms/step - loss: 0.5613 - acc: 0.9008 - val_loss: 0.6092 - val_acc: 0.8865
Epoch 192/500
63s 127ms/step - loss: 0.5601 - acc: 0.9025 - val_loss: 0.6156 - val_acc: 0.8890
Epoch 193/500
63s 127ms/step - loss: 0.5608 - acc: 0.9018 - val_loss: 0.6255 - val_acc: 0.8846
Epoch 194/500
63s 126ms/step - loss: 0.5668 - acc: 0.8993 - val_loss: 0.6239 - val_acc: 0.8812
Epoch 195/500
63s 126ms/step - loss: 0.5576 - acc: 0.9034 - val_loss: 0.6230 - val_acc: 0.8844
Epoch 196/500
63s 126ms/step - loss: 0.5642 - acc: 0.9002 - val_loss: 0.6197 - val_acc: 0.8853
Epoch 197/500
63s 126ms/step - loss: 0.5651 - acc: 0.8991 - val_loss: 0.6171 - val_acc: 0.8885
Epoch 198/500
63s 126ms/step - loss: 0.5602 - acc: 0.9028 - val_loss: 0.6147 - val_acc: 0.8872
Epoch 199/500
63s 126ms/step - loss: 0.5635 - acc: 0.9023 - val_loss: 0.6115 - val_acc: 0.8878
Epoch 200/500
63s 126ms/step - loss: 0.5618 - acc: 0.9015 - val_loss: 0.6213 - val_acc: 0.8853
Epoch 201/500
lr changed to 0.010000000149011612
63s 127ms/step - loss: 0.4599 - acc: 0.9378 - val_loss: 0.5280 - val_acc: 0.9159
Epoch 202/500
63s 126ms/step - loss: 0.4110 - acc: 0.9526 - val_loss: 0.5197 - val_acc: 0.9206
Epoch 203/500
63s 127ms/step - loss: 0.3926 - acc: 0.9573 - val_loss: 0.5123 - val_acc: 0.9200
Epoch 204/500
63s 127ms/step - loss: 0.3759 - acc: 0.9617 - val_loss: 0.5096 - val_acc: 0.9201
Epoch 205/500
63s 126ms/step - loss: 0.3625 - acc: 0.9633 - val_loss: 0.5113 - val_acc: 0.9201
Epoch 206/500
63s 126ms/step - loss: 0.3524 - acc: 0.9660 - val_loss: 0.5002 - val_acc: 0.9227
Epoch 207/500
63s 126ms/step - loss: 0.3444 - acc: 0.9675 - val_loss: 0.5007 - val_acc: 0.9229
Epoch 208/500
63s 126ms/step - loss: 0.3388 - acc: 0.9678 - val_loss: 0.4948 - val_acc: 0.9221
Epoch 209/500
63s 127ms/step - loss: 0.3282 - acc: 0.9700 - val_loss: 0.4957 - val_acc: 0.9231
Epoch 210/500
63s 126ms/step - loss: 0.3192 - acc: 0.9722 - val_loss: 0.4946 - val_acc: 0.9216
Epoch 211/500
63s 126ms/step - loss: 0.3153 - acc: 0.9713 - val_loss: 0.4878 - val_acc: 0.9205
Epoch 212/500
63s 126ms/step - loss: 0.3066 - acc: 0.9731 - val_loss: 0.4880 - val_acc: 0.9222
Epoch 213/500
63s 126ms/step - loss: 0.2996 - acc: 0.9739 - val_loss: 0.4867 - val_acc: 0.9219
Epoch 214/500
63s 126ms/step - loss: 0.2968 - acc: 0.9750 - val_loss: 0.4878 - val_acc: 0.9208
Epoch 215/500
63s 126ms/step - loss: 0.2880 - acc: 0.9757 - val_loss: 0.4854 - val_acc: 0.9226
Epoch 216/500
64s 127ms/step - loss: 0.2832 - acc: 0.9755 - val_loss: 0.4865 - val_acc: 0.9207
Epoch 217/500
63s 127ms/step - loss: 0.2759 - acc: 0.9780 - val_loss: 0.4830 - val_acc: 0.9209
Epoch 218/500
63s 127ms/step - loss: 0.2751 - acc: 0.9766 - val_loss: 0.4798 - val_acc: 0.9231
Epoch 219/500
63s 127ms/step - loss: 0.2701 - acc: 0.9775 - val_loss: 0.4781 - val_acc: 0.9228
Epoch 220/500
64s 127ms/step - loss: 0.2676 - acc: 0.9767 - val_loss: 0.4748 - val_acc: 0.9217
Epoch 221/500
64s 127ms/step - loss: 0.2580 - acc: 0.9790 - val_loss: 0.4820 - val_acc: 0.9205
Epoch 222/500
64s 127ms/step - loss: 0.2552 - acc: 0.9793 - val_loss: 0.4761 - val_acc: 0.9210
Epoch 223/500
63s 127ms/step - loss: 0.2510 - acc: 0.9797 - val_loss: 0.4766 - val_acc: 0.9215
Epoch 224/500
63s 126ms/step - loss: 0.2500 - acc: 0.9791 - val_loss: 0.4754 - val_acc: 0.9184
Epoch 225/500
63s 127ms/step - loss: 0.2453 - acc: 0.9793 - val_loss: 0.4659 - val_acc: 0.9233
Epoch 226/500
63s 126ms/step - loss: 0.2424 - acc: 0.9795 - val_loss: 0.4714 - val_acc: 0.9227
Epoch 227/500
63s 127ms/step - loss: 0.2367 - acc: 0.9804 - val_loss: 0.4790 - val_acc: 0.9169
Epoch 228/500
63s 126ms/step - loss: 0.2381 - acc: 0.9791 - val_loss: 0.4642 - val_acc: 0.9222
Epoch 229/500
63s 126ms/step - loss: 0.2304 - acc: 0.9814 - val_loss: 0.4627 - val_acc: 0.9201
Epoch 230/500
63s 126ms/step - loss: 0.2281 - acc: 0.9812 - val_loss: 0.4662 - val_acc: 0.9167
Epoch 231/500
63s 126ms/step - loss: 0.2260 - acc: 0.9810 - val_loss: 0.4733 - val_acc: 0.9164
Epoch 232/500
63s 127ms/step - loss: 0.2270 - acc: 0.9799 - val_loss: 0.4643 - val_acc: 0.9190
Epoch 233/500
63s 126ms/step - loss: 0.2190 - acc: 0.9817 - val_loss: 0.4691 - val_acc: 0.9160
Epoch 234/500
63s 126ms/step - loss: 0.2189 - acc: 0.9815 - val_loss: 0.4615 - val_acc: 0.9196
Epoch 235/500
63s 126ms/step - loss: 0.2155 - acc: 0.9821 - val_loss: 0.4510 - val_acc: 0.9188
Epoch 236/500
63s 126ms/step - loss: 0.2123 - acc: 0.9816 - val_loss: 0.4546 - val_acc: 0.9175
Epoch 237/500
63s 126ms/step - loss: 0.2138 - acc: 0.9810 - val_loss: 0.4443 - val_acc: 0.9185
Epoch 238/500
63s 127ms/step - loss: 0.2122 - acc: 0.9809 - val_loss: 0.4674 - val_acc: 0.9143

At epoch 238 I once again accidentally hit Ctrl+C (force of habit?), interrupting the program before it could finish. The accuracy had already reached 91.43%, and it would presumably have climbed a little more had training run to completion. I then left the office computer to run the program overnight and went home, only to find that a colleague in the same office had "helpfully" shut the computer down.

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht, Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, IEEE Transactions on Industrial Electronics, DOI: 10.1109/TIE.2020.2972458, Date of Publication: 13 February 2020.

https://ieeexplore.ieee.org/document/8998530
