Grid Search Hyperparameter Tuning for Deep Learning with Keras/Python (Part 2)

2018-06-06 13:16:59

How to Tune Network Weight Initialization

Neural network weight initialization used to be simple: just use small random numbers.

Today there are many different techniques to choose from; the Keras documentation provides the full list.

In this example, we will tune the choice of network weight initialization by evaluating all of the available techniques.

We will use the same weight initialization method on every layer. Ideally, it may be better to use a different weight initialization scheme depending on the activation function used in each layer. In the example below we use a rectifier in the hidden layer and, because this is a binary classification problem, a sigmoid activation function in the output layer.
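To make the per-layer idea concrete, here is a minimal hedged sketch (not what this example's grid search does) of matching the initializer to each layer's activation, using the same Keras 1.x API as the listings in this post; the specific pairings are illustrative:

# Hedged illustration: a different initializer per layer, matched to the
# activation. Not part of the grid search below, which uses one init_mode.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=8, init='he_uniform', activation='relu'))   # He init often paired with rectifiers
model.add(Dense(1, init='glorot_uniform', activation='sigmoid'))          # Glorot init often paired with sigmoid
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])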

The full code listing is provided below:

# Use scikit-learn to grid search the weight initialization
import numpy
from sklearn.grid_search import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier

# Function to create model, required for KerasClassifier
def create_model(init_mode='uniform'):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, init=init_mode, activation='relu'))
    model.add(Dense(1, init=init_mode, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = KerasClassifier(build_fn=create_model, nb_epoch=100, batch_size=10, verbose=0)

# define the grid search parameters
init_mode = ['uniform', 'lecun_uniform', 'normal', 'zero', 'glorot_normal', 'glorot_uniform', 'he_normal', 'he_uniform']
param_grid = dict(init_mode=init_mode)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))

Running this example produces the following output:

Best: 0.720052 using {'init_mode': 'uniform'}
0.720052 (0.024360) with: {'init_mode': 'uniform'}
0.348958 (0.024774) with: {'init_mode': 'lecun_uniform'}
0.712240 (0.012075) with: {'init_mode': 'normal'}
0.651042 (0.024774) with: {'init_mode': 'zero'}
0.700521 (0.010253) with: {'init_mode': 'glorot_normal'}
0.674479 (0.011201) with: {'init_mode': 'glorot_uniform'}
0.661458 (0.028940) with: {'init_mode': 'he_normal'}
0.678385 (0.004872) with: {'init_mode': 'he_uniform'}

We can see that the best result was achieved with a uniform weight initialization scheme, giving an accuracy of about 72%.
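One caveat that applies to every listing in this post: they target the Keras 1.x and pre-0.18 scikit-learn APIs that were current when the post was written. On newer versions several names changed, so treat the following as a hedged adaptation of the listing above rather than the original code: init became kernel_initializer, nb_epoch became epochs, GridSearchCV moved to sklearn.model_selection, and grid_scores_ was replaced by the cv_results_ dictionary (in recent TensorFlow releases the KerasClassifier wrapper also moved to the separate SciKeras package):

# Hedged adaptation for Keras 2.x / scikit-learn >= 0.18 (not the original listing)
import numpy
from sklearn.model_selection import GridSearchCV        # replaces sklearn.grid_search
from keras.models import Sequential
from keras.layers import Dense
# in recent TensorFlow this wrapper lives in the separate scikeras package
from keras.wrappers.scikit_learn import KerasClassifier

def create_model(init_mode='uniform'):
    model = Sequential()
    # 'init' was renamed to 'kernel_initializer' in Keras 2
    model.add(Dense(12, input_dim=8, kernel_initializer=init_mode, activation='relu'))
    model.add(Dense(1, kernel_initializer=init_mode, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

numpy.random.seed(7)
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
X, Y = dataset[:, 0:8], dataset[:, 8]

# 'nb_epoch' was renamed to 'epochs'
model = KerasClassifier(build_fn=create_model, epochs=100, batch_size=10, verbose=0)
init_mode = ['uniform', 'lecun_uniform', 'normal', 'zero',
             'glorot_normal', 'glorot_uniform', 'he_normal', 'he_uniform']
grid = GridSearchCV(estimator=model, param_grid=dict(init_mode=init_mode), n_jobs=-1)
grid_result = grid.fit(X, Y)

# 'grid_scores_' was replaced by the 'cv_results_' dictionary
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for mean, stdev, param in zip(grid_result.cv_results_['mean_test_score'],
                              grid_result.cv_results_['std_test_score'],
                              grid_result.cv_results_['params']):
    print("%f (%f) with: %r" % (mean, stdev, param))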

How to Tune the Neuron Activation Function

The activation function controls the non-linearity of individual neurons and when they fire.

Generally, the rectifier activation function is the most popular, but the sigmoid and tanh functions may be better choices for some problems.

In this example, we will evaluate and compare the different activation functions available in Keras. We will only use these functions in the hidden layer; because this is a binary classification problem, a sigmoid activation function is required in the output layer.

Generally, it is a good idea to prepare the data to match the ranges of the different transfer functions, which we will not do in this example.
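If you did want to prepare the data, one hedged way is to standardize the inputs inside a scikit-learn Pipeline, so the scaler is refit on each cross-validation fold without leaking test-fold statistics. The sketch below is illustrative only; it assumes the model wrapper and activation list defined in the full listing that follows, and StandardScaler is just one reasonable choice (a MinMaxScaler may suit sigmoid/tanh-style ranges better):

# Hedged sketch: scale inputs per cross-validation fold via a Pipeline.
# Assumes `model` (the KerasClassifier) and `activation` (the candidate
# list) from the full listing below.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipeline = Pipeline([
    ('scale', StandardScaler()),  # zero mean, unit variance per feature
    ('model', model),             # the wrapped Keras model
])
# parameters of a pipeline step are addressed as '<step>__<param>'
param_grid = dict(model__activation=activation)
grid = GridSearchCV(estimator=pipeline, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y)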

The full code listing is provided below:

# Use scikit-learn to grid search the activation function
import numpy
from sklearn.grid_search import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier

# Function to create model, required for KerasClassifier
def create_model(activation='relu'):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, init='uniform', activation=activation))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = KerasClassifier(build_fn=create_model, nb_epoch=100, batch_size=10, verbose=0)

# define the grid search parameters
activation = ['softmax', 'softplus', 'softsign', 'relu', 'tanh', 'sigmoid', 'hard_sigmoid', 'linear']
param_grid = dict(activation=activation)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))

Running this example produces the following output:

Best: 0.722656 using {'activation': 'linear'}
0.649740 (0.009744) with: {'activation': 'softmax'}
0.720052 (0.032106) with: {'activation': 'softplus'}
0.688802 (0.019225) with: {'activation': 'softsign'}
0.720052 (0.018136) with: {'activation': 'relu'}
0.691406 (0.019401) with: {'activation': 'tanh'}
0.680990 (0.009207) with: {'activation': 'sigmoid'}
0.691406 (0.014616) with: {'activation': 'hard_sigmoid'}
0.722656 (0.003189) with: {'activation': 'linear'}

Surprisingly (to me at least), the 'linear' activation function achieved the best result, with an accuracy of about 72%.

How to Tune Dropout Regularization

In this example, we will tune the dropout rate for regularization in an effort to limit overfitting and improve the model's ability to generalize. For best results, dropout is best combined with a weight constraint such as the max norm constraint.

For more on using dropout in deep learning models with Keras, see the post:

  • Dropout Regularization in Deep Learning Models with Keras/Python

This involves fitting both the dropout rate and the weight constraint. We will try dropout percentages between 0.0 and 0.9 (1.0 does not make sense) and max norm weight constraint values between 0 and 5.

The full code listing is provided below:

# Use scikit-learn to grid search the dropout rate
import numpy
from sklearn.grid_search import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm

# Function to create model, required for KerasClassifier
def create_model(dropout_rate=0.0, weight_constraint=0):
    # create model
    model = Sequential()
    model.add(Dense(12, input_dim=8, init='uniform', activation='linear', W_constraint=maxnorm(weight_constraint)))
    model.add(Dropout(dropout_rate))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = KerasClassifier(build_fn=create_model, nb_epoch=100, batch_size=10, verbose=0)

# define the grid search parameters
weight_constraint = [1, 2, 3, 4, 5]
dropout_rate = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
param_grid = dict(dropout_rate=dropout_rate, weight_constraint=weight_constraint)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))

Running this example produces the following output:

Best: 0.723958 using {'dropout_rate': 0.2, 'weight_constraint': 4}
0.696615 (0.031948) with: {'dropout_rate': 0.0, 'weight_constraint': 1}
0.696615 (0.031948) with: {'dropout_rate': 0.0, 'weight_constraint': 2}
0.691406 (0.026107) with: {'dropout_rate': 0.0, 'weight_constraint': 3}
0.708333 (0.009744) with: {'dropout_rate': 0.0, 'weight_constraint': 4}
0.708333 (0.009744) with: {'dropout_rate': 0.0, 'weight_constraint': 5}
0.710937 (0.008438) with: {'dropout_rate': 0.1, 'weight_constraint': 1}
0.709635 (0.007366) with: {'dropout_rate': 0.1, 'weight_constraint': 2}
0.709635 (0.007366) with: {'dropout_rate': 0.1, 'weight_constraint': 3}
0.695312 (0.012758) with: {'dropout_rate': 0.1, 'weight_constraint': 4}
0.695312 (0.012758) with: {'dropout_rate': 0.1, 'weight_constraint': 5}
0.701823 (0.017566) with: {'dropout_rate': 0.2, 'weight_constraint': 1}
0.710938 (0.009568) with: {'dropout_rate': 0.2, 'weight_constraint': 2}
0.710938 (0.009568) with: {'dropout_rate': 0.2, 'weight_constraint': 3}
0.723958 (0.027126) with: {'dropout_rate': 0.2, 'weight_constraint': 4}
0.718750 (0.030425) with: {'dropout_rate': 0.2, 'weight_constraint': 5}
0.721354 (0.032734) with: {'dropout_rate': 0.3, 'weight_constraint': 1}
0.707031 (0.036782) with: {'dropout_rate': 0.3, 'weight_constraint': 2}
0.707031 (0.036782) with: {'dropout_rate': 0.3, 'weight_constraint': 3}
0.694010 (0.019225) with: {'dropout_rate': 0.3, 'weight_constraint': 4}
0.709635 (0.006639) with: {'dropout_rate': 0.3, 'weight_constraint': 5}
0.704427 (0.008027) with: {'dropout_rate': 0.4, 'weight_constraint': 1}
0.717448 (0.031304) with: {'dropout_rate': 0.4, 'weight_constraint': 2}
0.718750 (0.030425) with: {'dropout_rate': 0.4, 'weight_constraint': 3}
0.718750 (0.030425) with: {'dropout_rate': 0.4, 'weight_constraint': 4}
0.722656 (0.029232) with: {'dropout_rate': 0.4, 'weight_constraint': 5}
0.720052 (0.028940) with: {'dropout_rate': 0.5, 'weight_constraint': 1}
0.703125 (0.009568) with: {'dropout_rate': 0.5, 'weight_constraint': 2}
0.716146 (0.029635) with: {'dropout_rate': 0.5, 'weight_constraint': 3}
0.709635 (0.008027) with: {'dropout_rate': 0.5, 'weight_constraint': 4}
0.703125 (0.011500) with: {'dropout_rate': 0.5, 'weight_constraint': 5}
0.707031 (0.017758) with: {'dropout_rate': 0.6, 'weight_constraint': 1}
0.701823 (0.018688) with: {'dropout_rate': 0.6, 'weight_constraint': 2}
0.701823 (0.018688) with: {'dropout_rate': 0.6, 'weight_constraint': 3}
0.690104 (0.027498) with: {'dropout_rate': 0.6, 'weight_constraint': 4}
0.695313 (0.022326) with: {'dropout_rate': 0.6, 'weight_constraint': 5}
0.697917 (0.014382) with: {'dropout_rate': 0.7, 'weight_constraint': 1}
0.697917 (0.014382) with: {'dropout_rate': 0.7, 'weight_constraint': 2}
0.687500 (0.008438) with: {'dropout_rate': 0.7, 'weight_constraint': 3}
0.704427 (0.011201) with: {'dropout_rate': 0.7, 'weight_constraint': 4}
0.696615 (0.016367) with: {'dropout_rate': 0.7, 'weight_constraint': 5}
0.680990 (0.025780) with: {'dropout_rate': 0.8, 'weight_constraint': 1}
0.699219 (0.019401) with: {'dropout_rate': 0.8, 'weight_constraint': 2}
0.701823 (0.015733) with: {'dropout_rate': 0.8, 'weight_constraint': 3}
0.684896 (0.023510) with: {'dropout_rate': 0.8, 'weight_constraint': 4}
0.696615 (0.017566) with: {'dropout_rate': 0.8, 'weight_constraint': 5}
0.653646 (0.034104) with: {'dropout_rate': 0.9, 'weight_constraint': 1}
0.677083 (0.012075) with: {'dropout_rate': 0.9, 'weight_constraint': 2}
0.679688 (0.013902) with: {'dropout_rate': 0.9, 'weight_constraint': 3}
0.669271 (0.017566) with: {'dropout_rate': 0.9, 'weight_constraint': 4}
0.669271 (0.012075) with: {'dropout_rate': 0.9, 'weight_constraint': 5}

We can see that the best result was a dropout rate of 0.2 (not 0.2%) combined with a max norm weight constraint of 4, achieving an accuracy of about 72%.
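Once a winning configuration is found, it can be baked into a final model trained on all of the data. A minimal hedged sketch, reusing the create_model function, X, and Y from the listing above (the Keras 1.x fit arguments match the listings in this post):

# Hedged sketch: train a final model with the best found configuration.
final = create_model(dropout_rate=0.2, weight_constraint=4)
final.fit(X, Y, nb_epoch=100, batch_size=10, verbose=0)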

How to Determine the Number of Neurons in the Hidden Layer

The number of neurons in a layer is a very important parameter to tune. Generally, the number of neurons in a layer controls the representational capacity of the network, at least at that point in the topology.

Also, generally speaking, a large enough single-layer network can approximate any other neural network, at least in theory.

In this example, we will tune the number of neurons in a single hidden layer, trying values from 1 to 30 in steps of 5.

A larger network requires more training, and at the very least the batch size and number of epochs should ideally be optimized along with the number of neurons; a hedged sketch of such a joint grid follows.
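The candidate values below are illustrative, not tuned; note that the cost grows multiplicatively with each added parameter, and that batch_size and nb_epoch are grid-searchable because the KerasClassifier wrapper accepts them directly:

# Hedged sketch: search neurons jointly with batch size and epochs.
# This grid has 7 * 3 * 3 = 63 candidates, each fit k times by cross
# validation, so expect a long run.
neurons = [1, 5, 10, 15, 20, 25, 30]
batch_size = [10, 20, 40]   # illustrative values
nb_epoch = [50, 100, 150]   # illustrative values ('epochs' in Keras 2+)
param_grid = dict(neurons=neurons, batch_size=batch_size, nb_epoch=nb_epoch)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)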

The full code listing for the neurons-only search is provided below:

# Use scikit-learn to grid search the number of neurons
import numpy
from sklearn.grid_search import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm

# Function to create model, required for KerasClassifier
def create_model(neurons=1):
    # create model
    model = Sequential()
    model.add(Dense(neurons, input_dim=8, init='uniform', activation='linear', W_constraint=maxnorm(4)))
    model.add(Dropout(0.2))
    model.add(Dense(1, init='uniform', activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = KerasClassifier(build_fn=create_model, nb_epoch=100, batch_size=10, verbose=0)

# define the grid search parameters
neurons = [1, 5, 10, 15, 20, 25, 30]
param_grid = dict(neurons=neurons)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
grid_result = grid.fit(X, Y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
for params, mean_score, scores in grid_result.grid_scores_:
    print("%f (%f) with: %r" % (scores.mean(), scores.std(), params))

Running this example produces the following output:

Best: 0.714844 using {'neurons': 5}
0.700521 (0.011201) with: {'neurons': 1}
0.714844 (0.011049) with: {'neurons': 5}
0.712240 (0.017566) with: {'neurons': 10}
0.705729 (0.003683) with: {'neurons': 15}
0.696615 (0.020752) with: {'neurons': 20}
0.713542 (0.025976) with: {'neurons': 25}
0.705729 (0.008027) with: {'neurons': 30}

We can see that the best result was achieved with 5 neurons in the hidden layer, giving an accuracy of about 71%.

Tips for Hyperparameter Optimization

This section lists some handy tips to consider when tuning the hyperparameters of your neural network.

  • k-fold Cross Validation. You can see that the results from the examples in this post show some variance. A default 3-fold cross validation was used, but perhaps k=5 or k=10 would be more stable. Carefully choose your cross-validation configuration to ensure your results are stable (see the sketch after this list).
  • Review the Whole Grid. Do not just focus on the best result; review the whole grid of results and look for trends to support configuration decisions.
  • Parallelize. Use all your cores if you can; neural networks are slow to train and we often want to try a lot of different parameters. Consider running on AWS instances.
  • Use a Sample of Your Dataset. Because networks are slow to train, try training them on a smaller sample of your training dataset, just to get an idea of the general directions of parameters rather than the optimal configuration.
  • Start with Coarse Grids. Start with coarse-grained grids and zoom into finer-grained grids once you can narrow the scope (also illustrated in the sketch after this list).
  • Do Not Transfer Results. Results are generally problem-specific. Try to avoid reusing a favorite configuration on each new problem you see; it is unlikely that the optimal results you discover on one problem will transfer to another. Instead, look for broader trends, such as the number of layers or relationships between parameters.
  • Reproducibility is a Problem. Although we set the seed for the random number generator in NumPy, the results are not 100% reproducible; there is more to reproducibility when grid searching wrapped Keras models than is covered in this post.
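As a concrete, hedged illustration of the cross-validation and coarse-to-fine tips above, reusing the model wrapper, X, and Y from the dropout section (the fold count and grid values are arbitrary examples, not recommendations):

# Hedged sketch for two of the tips above.
# 1) Make the cross-validation explicit and stronger than the default,
#    starting from a wide, cheap coarse grid:
coarse = dict(dropout_rate=[0.0, 0.3, 0.6, 0.9],
              weight_constraint=[4])  # held fixed at an illustrative value
grid_result = GridSearchCV(model, coarse, cv=10, n_jobs=-1).fit(X, Y)

# 2) Zoom into a fine-grained grid around the best coarse value:
best = grid_result.best_params_['dropout_rate']
fine = dict(dropout_rate=[max(0.0, best - 0.1), best, best + 0.1],
            weight_constraint=[4])
grid_result = GridSearchCV(model, fine, cv=10, n_jobs=-1).fit(X, Y)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))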

Summary

In this post, you discovered how to tune the hyperparameters of your neural networks in Python using Keras and scikit-learn.

Specifically, you learned:

  • How to wrap Keras models for use in scikit-learn and how to use grid search.
  • How to grid search a suite of different standard neural network parameters for Keras models.
  • How to design your own hyperparameter optimization experiments.

Have you had experience tuning hyperparameters of large neural networks? If so, please send your story and experience to zhoujd@csdn.net.
