Section 9: BP backpropagation networks and a numpy implementation

2021-12-29

Contents

    • BP
      • BP algorithm steps
    • numpy implementation

BP

  1. The input signal first propagates forward to the hidden layer; after passing through the activation function, the outputs of the hidden neurons are propagated on to the output neurons, which produce the final result.
  2. Applications of BP networks:
    1. Function approximation: train a network on input vectors and corresponding output vectors so that it approximates a function;
    2. Pattern recognition: associate a specific output vector with a given input vector;
    3. Classification: classify input vectors in a suitably defined way;
    4. Data compression: reduce the dimensionality of the output vector to ease transmission or storage.
  3. Requirements on the BP activation function:
    1. It must be differentiable everywhere, so the binary threshold function {0,1} and the sign function {-1,1} cannot be used;
    2. BP uses S-type (sigmoid) functions, hyperbolic tangent functions, or linear functions;
    3. An S-type function acts as a nonlinear amplifier: it maps input signals ranging from negative to positive infinity into outputs between 0 and 1 (or -1 and 1);
    4. Its amplification factor is small for large input signals and large for small ones, so an S-type activation can handle and approximate nonlinear input-output relations;
    5. Usually the hidden layer uses an S-type activation function while the output layer uses a linear one (see the sketch after this list).
  4. Adding hidden layers raises the network's fitting accuracy, but at the cost of its generalization ability.
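A minimal sketch of this forward pass in numpy (hypothetical toy sizes; S-type hidden layer and linear output layer, as in 3.5):

import numpy as np

def sigmoid(z):
    # S-type function: squashes (-inf, inf) into (0, 1); the gain is
    # large for small |z| and small for large |z|, as noted in 3.4
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.uniform(-0.3, 0.3, (3, 4))  # input -> hidden weights (3 inputs, 4 hidden units)
W2 = rng.uniform(-0.3, 0.3, (4, 1))  # hidden -> output weights (1 output)

x = np.array([1.0, 0.0, 1.0])        # example input signal
h = sigmoid(x @ W1)                  # hidden layer: S-type activation
y = h @ W2                           # output layer: linear activation
print(h, y)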

BP algorithm steps

(1) Initialization: set the learning rate eta, the momentum coefficient alpha, pass = 0, the maximum number of passes max, and the error tolerance m_e;
(2) Randomly assign initial values in the range [-0.3, 0.3] to all weights and neuron thresholds w_{ij}^{(n)};
(3) Check whether the input-side connection weights of each neuron in layers 2 through M satisfy the required condition; if so, go to (4), otherwise scale them down until they do;
(4) Set p = 0 and E_n = 0, and increment the pass counter: pass = pass + 1.
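Step (2), for instance, is a one-liner in numpy (a sketch; nIn and nOut are hypothetical layer sizes):

import numpy as np

nIn, nOut = 8, 4  # hypothetical layer sizes
w = np.random.uniform(-0.3, 0.3, size=(nIn, nOut))  # step (2): uniform init in [-0.3, 0.3]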

numpy implementation

  • Problem: use 7 short line segments to form the 10 digit shapes (as on a seven-segment display). Represent the 7 segments by a vector [b_1, b_2, b_3, b_4, b_5, b_6, b_7]; a segment used in a digit's shape gets component value 1 and an unused segment gets 0, so each digit shape is represented by one vector, numbered in order 1, 2, ..., 10. Design a neural network that can distinguish odd digits from even digits.

The X matrix is X = [[1,1,0,0,0,0,0],[0,1,1,0,1,1,1],[0,0,1,1,1,1,1],[1,0,1,1,0,1,0],[1,0,0,1,1,1,1],[1,1,0,1,1,1,1],[0,0,1,1,1,0,0],[1,1,1,1,1,1,1],[1,0,1,1,1,1,1]] and the label vector is Y = [1,0,1,0,1,0,1,0,1].
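The labels just encode digit parity; a quick check (assuming row k of X encodes the shape of digit k+1 and label 1 marks an odd digit):

Y_check = [(k + 1) % 2 for k in range(9)]  # digits 1..9 -> 1 if odd, 0 if even
print(Y_check)  # [1, 0, 1, 0, 1, 0, 1, 0, 1] -- matches Y above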

Code (Python):
# -*- coding:utf-8 -*-
# /usr/bin/python

import numpy as np
import math

class BP():
    def __init__(self, hidden_n, output_n, learningrate, epoch):
        '''BP network hyperparameters and state'''
        self.hidden_n = hidden_n          # number of hidden units
        self.output_n = output_n          # number of output units
        self.hideWeight = None            # input -> hidden weights
        self.outputWeight = None          # hidden -> output weights
        self.learningrate = learningrate
        self.inputN = None                # input size (including bias)
        self.hideOutput = None            # hidden-layer outputs
        self.output = None                # network outputs
        self.loss = None
        self.epoch = epoch
        self.limitloss = 0.01             # per-sample loss below which no update is made

    def initWeight(self, n, m):
        '''Randomly initialize an n-by-m weight matrix in [-0.3, 0.3], per step (2) above'''
        return np.random.uniform(-0.3, 0.3, (n, m))

    def sigmoid(self, x):
        '''sigmoid activation function'''
        return 1.0 / (1.0 + np.exp(-x))

    def linear(self, x):
        '''linear activation function (defined but unused; the output layer below also uses sigmoid)'''
        return x

    def sigmoidDerivative(self, x):
        '''derivative of the sigmoid in terms of its output: f'(z) = f(z)(1 - f(z)) = x - x**2 for x = f(z)'''
        return x - x**2

    def initBp(self, inputN):
        '''Initialize the network for a given input size'''
        self.inputN = inputN + 1  # +1 for the bias input

        # init weights (the extra hidden unit plays the role of a hidden bias)
        self.hideWeight = self.initWeight(self.inputN, self.hidden_n + 1)
        self.outputWeight = self.initWeight(self.hidden_n + 1, self.output_n)

    def forwardPropagation(self, X):
        '''forward pass: input -> hidden -> output'''
        self.hideOutput = self.sigmoid(np.dot(X, self.hideWeight))
        # self.hideOutput = np.c_[self.hideOutput, np.ones(self.hideOutput.shape[0])]  # append a ones column for a hidden bias (unused)
        self.output = self.sigmoid(np.dot(self.hideOutput, self.outputWeight))

    def lossFun(self, Y):
        '''squared-error loss: 0.5 * sum((Y - output)^2)'''
        self.loss = 0.5 * np.sum((Y - self.output) ** 2)
        return self.loss

    def backPropagation(self, X, Y):
        '''backward pass: gradient-descent updates, output layer first'''
        # output-layer error terms: delta_k = (o_k - y_k) * f'(o_k)
        outputDelta = (self.output - Y) * self.sigmoidDerivative(self.output)
        # propagate the error back through the (pre-update) output weights
        hiddenError = np.dot(self.outputWeight, outputDelta)
        # update hidden -> output weights: w -= lr * hideOutput * delta
        self.outputWeight -= self.learningrate * np.outer(self.hideOutput, outputDelta)
        # hidden-layer error terms and input -> hidden update
        hiddenDelta = hiddenError * self.sigmoidDerivative(self.hideOutput)
        self.hideWeight -= self.learningrate * np.outer(X, hiddenDelta)

    def train(self, X, Y):
        '''train with per-sample (stochastic) weight updates'''
        inputN = X.shape[1]
        samplesN = X.shape[0]
        X = np.c_[X, np.ones(samplesN)]  # append a ones column for the bias
        self.initBp(inputN)
        for i in range(self.epoch):
            for one in range(samplesN):
                x, y = X[one, :], Y[one, :]
                self.forwardPropagation(x)
                loss = self.lossFun(y)
                if loss > self.limitloss:  # skip the update once this sample is fit well enough
                    self.backPropagation(x, y)

    def predict(self, X):
        '''forward-propagate each sample and print the raw network output'''
        samplesN = X.shape[0]
        X = np.c_[X, np.ones(samplesN)]  # append a ones column for the bias
        for one in range(samplesN):
            x = X[one, :]
            self.forwardPropagation(x)
            print(self.output)

X=[[1,1,0,0,0,0,0],[0,1,1,0,1,1,1],[0,0,1,1,1,1,1],[1,0,1,1,0,1,0],[1,0,0,1,1,1,1],[1,1,0,1,1,1,1],[0,0,1,1,1,0,0],[1,1,1,1,1,1,1],[1,0,1,1,1,1,1]]
Y=[[1],[0],[1],[0],[1],[0],[1],[0],[1]]

xtest = [[1,1,0,0,0,0,0],[0,1,1,0,1,1,1]]
print(X, "\n", Y)
XTrain = np.array(X)
YTrain = np.array(Y)
xtest = np.array(xtest)
print(XTrain.shape[1])
print(XTrain)

hidden_n,output_n,learningrate,epoch = 3,1,0.5,1000
newbp = BP(hidden_n,output_n,learningrate,epoch)
newbp.train(XTrain,YTrain)
newbp.predict(xtest)
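predict() prints the raw sigmoid outputs in (0, 1); to read them as class labels, threshold at 0.5 (a usage sketch, not part of the original program):

for x in xtest:
    xb = np.r_[x, 1.0]  # append the bias input, as train()/predict() do internally
    newbp.forwardPropagation(xb)
    print(1 if newbp.output[0] > 0.5 else 0)  # 1 = odd digit, 0 = even digit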
