Overview
Sometimes we need to predict a mapping onto continuous values, as in the house-price prediction problem. Unlike the earlier classification problems, where the output was one of a few classes, the output here is a real number. In such cases we usually fit the data with a linear regression algorithm.
Linear Regression
$$h_\theta(x) = \theta_0 + \theta_1 x$$
This example assumes a dataset of inputs x and targets y: given an input x, the hypothesis function produces a prediction h(x). Our goal is to find parameter values θ that minimize the overall error between the predictions h(x) and the true values y. We usually measure how well the model fits the training samples with the mean squared error cost function:
$$J(\theta) = J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(h(x^{(i)}) - y^{(i)}\right)^2$$
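As a quick illustration, here is a minimal NumPy sketch of this cost function; the names cost, X, y, and theta are assumptions for the example, not part of the code later in this post:

import numpy as np

def cost(theta, X, y):
    # J(theta) = 1/(2m) * sum((h(x_i) - y_i)^2),
    # where X is assumed to include a leading bias column of ones
    # and y is a 1-D array of targets.
    m = len(y)
    residuals = X.dot(theta) - y
    return residuals.dot(residuals) / (2 * m)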
Minimizing the cost function then becomes a mathematical optimization problem: we need to find the θ that minimizes J(θ). We usually solve it with gradient descent. This can be understood as searching for a global optimum: classical analysis locates the optimum through the derivative, while gradient descent searches for it iteratively, repeatedly stepping in the direction of steepest descent. The update rule is:
$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$
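To make the update rule concrete, here is a minimal gradient-descent sketch for linear regression in plain NumPy; the function name gradient_descent and the defaults alpha=0.01 and num_iters=1000 are illustrative assumptions, not values mandated by the algorithm:

import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    # Repeatedly apply theta_j := theta_j - alpha * dJ/dtheta_j.
    # X is assumed to include a leading column of ones for the intercept;
    # y is a 1-D array of targets.
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        gradient = X.T.dot(X.dot(theta) - y) / m  # gradient of the MSE cost
        theta -= alpha * gradient
    return theta

Each iteration moves θ one step of size α against the gradient, so the cost keeps decreasing until it settles near the minimum of the convex MSE surface.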
Linear Regression Examples
Below I look at two examples, one from the deep learning side and one from the machine learning side.
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
%matplotlib inline
def pre_boston_data():
    '''Load the Boston housing dataset and build the training data.'''
    boston = load_boston()
    features = np.array(boston.data)
    labels = np.array(boston.target)
    return features, labels
def normalizer(dataset):
    '''Standardize the data: zero mean and unit variance per feature.'''
    mean = np.mean(dataset, axis=0)
    std = np.std(dataset, axis=0)
    return (dataset - mean) / std
def bias_vector(features, labels):
    '''Prepend a bias column of ones and reshape features and labels.'''
    n_training_samples = features.shape[0]
    n_dim = features.shape[1]
    ones = np.ones(n_training_samples)
    combine = np.c_[ones, features]
    f = np.reshape(combine, [n_training_samples, n_dim + 1])
    l = np.reshape(labels, [n_training_samples, 1])
    return f, l
# Preprocess and normalize the dataset
features, labels = pre_boston_data()
normalized_features = normalizer(features)
data, label = bias_vector(normalized_features, labels)
train_x, test_x, train_y, test_y = train_test_split(data, label, test_size=0.25, random_state=100)
n_dim = train_x.shape[1]
# Build the model and set up gradient-descent training (TensorFlow 1.x API)
learning_rate = 0.01
training_epochs = 1000
log_loss = np.empty(shape=[0], dtype=float)  # training-loss history
X = tf.placeholder(tf.float32, [None, n_dim])
Y = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.ones([n_dim, 1]))  # weights, including the bias term
y_ = tf.matmul(X, W)  # linear hypothesis h(x) = XW
cost_op = tf.reduce_mean(tf.square(y_ - Y))  # mean squared error cost
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_op)
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epochs):
        sess.run(training_step, feed_dict={X: train_x, Y: train_y})
        log_loss = np.append(log_loss, sess.run(cost_op, feed_dict={X: train_x, Y: train_y}))
    pred_y = sess.run(y_, feed_dict={X: test_x})
mse = np.mean(np.square(pred_y - test_y))  # test-set mean squared error
fig, ax = plt.subplots()
ax.scatter(test_y, pred_y)
ax.plot([test_y.min(), test_y.max()], [test_y.min(), test_y.max()], 'k--', lw=3)
ax.set_xlabel('Measured')
ax.set_ylabel('Predicted')
The final result is shown in the figure below: