100 Days of Machine Learning | Day 1: Data Preprocessing

2019-04-08 11:51:13

Getting started is always the hardest part. I have wanted to put together this tutorial series for a long time.

I just got back from a long business trip, and things have finally settled down.

Now, on to the main content.

Data preprocessing is the most fundamental, and also the most tedious, part of machine learning.

Before we pour our energy into deriving all the various algorithms, the first thing we should do is get data preprocessing sorted out.

This step is indispensable in every algorithm implementation and hands-on exercise that follows.

So don't let the tedium put you off; roll up your sleeves and get to work.

Readers with a stronger background can treat this as a review and run through it once more.

Without further ado, let's complete data preprocessing in six steps.

Actually, I feel one step is missing here: inspecting the data.

Below are ten records with country, age, salary, and whether a purchase was made.

There is categorical data, numerical data, and a few missing values.

It looks like a classification problem:

predicting whether a purchase will be made from country, age, and salary.

OK, now that we have a rough picture of the data, let the show begin.
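Speaking of that missing inspection step: here is a minimal sketch of how the data could be looked over with pandas, using the same Data.csv that is loaded in Step 2 below (treat it as an optional warm-up, not part of the original six steps):

import pandas as pd

dataset = pd.read_csv('Data.csv')   # same file used in Step 2
print(dataset.head())               # peek at the first few rows
dataset.info()                      # column dtypes and non-null counts
print(dataset.isnull().sum())       # number of missing values per column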

Step 1: Import the libraries

import numpy as np   # numerical computing
import pandas as pd  # data loading and manipulation

Step 2: Import the dataset

dataset = pd.read_csv('Data.csv')   # load the dataset
X = dataset.iloc[ : , :-1].values   # independent variables: all columns except the last
Y = dataset.iloc[ : , 3].values     # dependent variable: the 4th column (Purchased)
print("X")
print(X)
print("Y")
print(Y)

The goal of this step is to split the data into a matrix of independent variables and a vector of the dependent variable.

The result is as follows:

X
[['France' 44.0 72000.0]
 ['Spain' 27.0 48000.0]
 ['Germany' 30.0 54000.0]
 ['Spain' 38.0 61000.0]
 ['Germany' 40.0 nan]
 ['France' 35.0 58000.0]
 ['Spain' nan 52000.0]
 ['France' 48.0 79000.0]
 ['Germany' 50.0 83000.0]
 ['France' 37.0 67000.0]]
Y
['No' 'Yes' 'No' 'No' 'Yes' 'Yes' 'No' 'Yes' 'No' 'Yes']
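For reference, judging from the output above, Data.csv presumably looks something like this (the column names are my assumption, since they are not shown in the printed arrays):

Country,Age,Salary,Purchased
France,44,72000,No
Spain,27,48000,Yes
Germany,30,54000,No
Spain,38,61000,No
Germany,40,,Yes
France,35,58000,Yes
Spain,,52000,No
France,48,79000,Yes
Germany,50,83000,No
France,37,67000,Yes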

Step 3: Handle the missing data

from sklearn.preprocessing import Imputer
# replace NaN entries in the Age and Salary columns with the column mean (axis = 0)
imputer = Imputer(missing_values = "NaN", strategy = "mean", axis = 0)
imputer = imputer.fit(X[ : , 1:3])            # learn the means on columns 1 and 2
X[ : , 1:3] = imputer.transform(X[ : , 1:3])  # fill in the missing values
print("---------------------")
print("Step 3: Handling the missing data")
print("X")
print(X)

For details on how to use the Imputer class, see

http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing

In this example we fill the missing values with the mean of each column (mean imputation).

The output is as follows:

X
[['France' 44.0 72000.0]
 ['Spain' 27.0 48000.0]
 ['Germany' 30.0 54000.0]
 ['Spain' 38.0 61000.0]
 ['Germany' 40.0 63777.77777777778]
 ['France' 35.0 58000.0]
 ['Spain' 38.77777777777778 52000.0]
 ['France' 48.0 79000.0]
 ['Germany' 50.0 83000.0]
 ['France' 37.0 67000.0]]
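A side note: the Imputer class was removed in scikit-learn 0.22. On recent versions, a minimal equivalent sketch (same mean strategy on the Age and Salary columns) uses SimpleImputer from sklearn.impute:

import numpy as np
from sklearn.impute import SimpleImputer

# modern replacement for Imputer: fill NaN with the per-column mean
imputer = SimpleImputer(missing_values = np.nan, strategy = "mean")
X[ : , 1:3] = imputer.fit_transform(X[ : , 1:3])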

Step 4: Encode categorical data as numbers

from sklearn.preprocessing import LabelEncoder, OneHotEncoder
# encode the Country column as integers (0, 1, 2)
labelencoder_X = LabelEncoder()
X[ : , 0] = labelencoder_X.fit_transform(X[ : , 0])
# creating dummy variables: one-hot encode the Country column
onehotencoder = OneHotEncoder(categorical_features = [0])
X = onehotencoder.fit_transform(X).toarray()
# encode the dependent variable (No/Yes -> 0/1)
labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)
print("X")
print(X)
print("Y")
print(Y)

For LabelEncoder usage, see

http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html

X
[[1.00000000e+00 0.00000000e+00 0.00000000e+00 4.40000000e+01
  7.20000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 2.70000000e+01
  4.80000000e+04]
 [0.00000000e+00 1.00000000e+00 0.00000000e+00 3.00000000e+01
  5.40000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.80000000e+01
  6.10000000e+04]
 [0.00000000e+00 1.00000000e+00 0.00000000e+00 4.00000000e+01
  6.37777778e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.50000000e+01
  5.80000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.87777778e+01
  5.20000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 4.80000000e+01
  7.90000000e+04]
 [0.00000000e+00 1.00000000e+00 0.00000000e+00 5.00000000e+01
  8.30000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.70000000e+01
  6.70000000e+04]]
Y
[0 1 0 0 1 1 0 1 0 1]
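Likewise, the categorical_features argument of OneHotEncoder was removed in scikit-learn 0.22. On recent versions, a minimal sketch of the same dummy-variable encoding would go through ColumnTransformer; the modern OneHotEncoder accepts string categories directly, so the LabelEncoder pass over X is no longer needed (the encoding of Y is unchanged):

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, LabelEncoder

# one-hot encode column 0 (Country); pass Age and Salary through unchanged
ct = ColumnTransformer([("country", OneHotEncoder(), [0])], remainder = "passthrough")
X = ct.fit_transform(X)

labelencoder_Y = LabelEncoder()
Y = labelencoder_Y.fit_transform(Y)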

Step 5: Split the dataset into training and test sets

from sklearn.model_selection import train_test_split
# hold out 20% of the samples as the test set; random_state fixes the shuffle for reproducibility
X_train, X_test, Y_train, Y_test = train_test_split( X , Y , test_size = 0.2, random_state = 0)
print("X_train")
print(X_train)
print("X_test")
print(X_test)
print("Y_train")
print(Y_train)
print("Y_test")
print(Y_test)

For train_test_split usage, see

http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html

The result is as follows:

X_train
[[0.00000000e+00 1.00000000e+00 0.00000000e+00 4.00000000e+01
  6.37777778e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.70000000e+01
  6.70000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 2.70000000e+01
  4.80000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.87777778e+01
  5.20000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 4.80000000e+01
  7.90000000e+04]
 [0.00000000e+00 0.00000000e+00 1.00000000e+00 3.80000000e+01
  6.10000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 4.40000000e+01
  7.20000000e+04]
 [1.00000000e+00 0.00000000e+00 0.00000000e+00 3.50000000e+01
  5.80000000e+04]]
X_test
[[0.0e+00 1.0e+00 0.0e+00 3.0e+01 5.4e+04]
 [0.0e+00 1.0e+00 0.0e+00 5.0e+01 8.3e+04]]
Y_train
[1 1 1 0 1 0 0 1]
Y_test
[0 0]
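Notice that Y_test above contains only 0s ('No'); with only ten samples, a plain random split can easily end up unbalanced. As an optional variant, train_test_split accepts a stratify argument that keeps the class proportions similar in both sets:

# optional: preserve the No/Yes ratio in both the training and the test set
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size = 0.2, random_state = 0, stratify = Y)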

Step 6: Feature scaling

from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)  # fit the scaler on the training set, then transform it
X_test = sc_X.transform(X_test)        # reuse the training-set mean and std on the test set
print("---------------------")
print("Step 6: Feature Scaling")
print("X_train")
print(X_train)
print("X_test")
print(X_test)

Many machine learning algorithms rely on the Euclidean distance between two data points in their computations.

When features differ greatly in magnitude, units, and range, this causes a problem:

features with large values dominate the distance calculation over features with small values.

We deal with this through feature standardization, also known as Z-score normalization: each feature is rescaled to have zero mean and unit variance.

Here we use StandardScaler from sklearn.preprocessing.

Usage: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html

X_train
[[-1.          2.64575131 -0.77459667  0.26306757  0.12381479]
 [ 1.         -0.37796447 -0.77459667 -0.25350148  0.46175632]
 [-1.         -0.37796447  1.29099445 -1.97539832 -1.53093341]
 [-1.         -0.37796447  1.29099445  0.05261351 -1.11141978]
 [ 1.         -0.37796447 -0.77459667  1.64058505  1.7202972 ]
 [-1.         -0.37796447  1.29099445 -0.0813118  -0.16751412]
 [ 1.         -0.37796447 -0.77459667  0.95182631  0.98614835]
 [ 1.         -0.37796447 -0.77459667 -0.59788085 -0.48214934]]
X_test
[[-1.          2.64575131 -0.77459667 -1.45882927 -0.90166297]
 [-1.          2.64575131 -0.77459667  1.98496442  2.13981082]]
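To make the scaling concrete, here is a minimal NumPy sketch of what StandardScaler computes, namely z = (x - mean) / std per column, applied to the unscaled X_train and X_test from Step 5 and reusing the training-set statistics for the test set (the variable names are mine):

import numpy as np

mean = X_train.mean(axis = 0)   # per-column mean of the training set
std = X_train.std(axis = 0)     # per-column standard deviation (population std, as StandardScaler uses)

X_train_scaled = (X_train - mean) / std   # each column now has zero mean and unit variance
X_test_scaled = (X_test - mean) / std     # test set scaled with the training-set statistics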

That's a wrap.

If you have any questions, feel free to leave a comment!
