[Machine Learning] Decision Tree Code Exercise

2022-06-02 10:37:13

This article is reposted from 机器学习初学者 (Machine Learning Beginners), by the same author.

This is the companion code for the "Decision Tree" chapter of the Chinese University MOOC course Machine Learning. Course page: https://www.icourse163.org/course/WZU-1464096179

1. A classification decision tree is a tree-structured model that classifies instances based on their features. A decision tree can be converted into a set of if-then rules, and it can also be viewed as a conditional probability distribution of classes defined over a partition of the feature space.

2. Decision tree learning aims to build a tree that fits the training data well while keeping complexity low. Because directly selecting the optimal tree from all possible trees is an NP-complete problem, heuristic methods are used in practice to learn a suboptimal tree.

A decision tree learning algorithm consists of three parts: feature selection, tree generation, and tree pruning. Commonly used algorithms include ID3, C4.5, and CART.

3. The goal of feature selection is to pick features that can effectively classify the training data; the key is the selection criterion. The commonly used criteria are information gain (ID3), information gain ratio (C4.5), and the Gini index (CART); their standard definitions are given below.
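For reference, the formulas behind these criteria are added here (the original post only names them). For a training set D with classes C_1, ..., C_K and a feature A that splits D into subsets D_1, ..., D_n:

Code language: LaTeX
\begin{aligned}
H(D) &= -\sum_{k=1}^{K} \frac{|C_k|}{|D|}\log_2\frac{|C_k|}{|D|} && \text{(entropy)}\\
H(D\mid A) &= \sum_{i=1}^{n} \frac{|D_i|}{|D|}\,H(D_i) && \text{(conditional entropy)}\\
g(D,A) &= H(D) - H(D\mid A) && \text{(information gain, ID3)}\\
g_R(D,A) &= \frac{g(D,A)}{H_A(D)}, \quad H_A(D) = -\sum_{i=1}^{n}\frac{|D_i|}{|D|}\log_2\frac{|D_i|}{|D|} && \text{(information gain ratio, C4.5)}\\
\mathrm{Gini}(D) &= 1 - \sum_{k=1}^{K}\Big(\frac{|C_k|}{|D|}\Big)^2 && \text{(Gini index, CART)}
\end{aligned}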

4. Tree generation. Maximum information gain, maximum information gain ratio, or minimum Gini index is typically used as the feature-selection criterion. Generation starts at the root node and recursively builds the tree by computing the information gain (or another criterion) of each feature. This amounts to repeatedly choosing the locally optimal feature and splitting the training set into subsets that can be classified essentially correctly.

5. Tree pruning. A fully grown tree tends to overfit, so it is pruned to simplify the learned model. Pruning usually removes some leaf nodes, or subtrees above leaf nodes, and turns their parent node (or the root) into a new leaf, thereby simplifying the generated tree; a concrete sketch follows.
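As an illustration of the pruning idea, here is a minimal sketch using scikit-learn's cost-complexity pruning (a related but different procedure from the book's pruning algorithm, and not part of the original course code):

Code language: Python
# Minimal cost-complexity pruning sketch (assumes scikit-learn is installed).
# Larger ccp_alpha values penalize tree complexity more and prune more leaves.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
for alpha in path.ccp_alphas:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X, y)
    print(f'ccp_alpha={alpha:.4f}  leaves={pruned.get_n_leaves()}')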

Code language: Python
import numpy as np
import pandas as pd
import math
from math import log

Creating the data

Code language: Python
def create_data():
    datasets = [['青年', '否', '否', '一般', '否'],
               ['青年', '否', '否', '好', '否'],
               ['青年', '是', '否', '好', '是'],
               ['青年', '是', '是', '一般', '是'],
               ['青年', '否', '否', '一般', '否'],
               ['中年', '否', '否', '一般', '否'],
               ['中年', '否', '否', '好', '否'],
               ['中年', '是', '是', '好', '是'],
               ['中年', '否', '是', '非常好', '是'],
               ['中年', '否', '是', '非常好', '是'],
               ['老年', '否', '是', '非常好', '是'],
               ['老年', '否', '是', '好', '是'],
               ['老年', '是', '否', '好', '是'],
               ['老年', '是', '否', '非常好', '是'],
               ['老年', '否', '否', '一般', '否'],
               ]
    labels = [u'年龄', u'有工作', u'有自己的房子', u'信贷情况', u'类别']
    # return the dataset and the name of each column
    return datasets, labels
Code language: Python
datasets, labels = create_data()
Code language: Python
train_data = pd.DataFrame(datasets, columns=labels)
Code language: Python
train_data

Output:
    年龄  有工作  有自己的房子  信贷情况  类别
0   青年  否      否            一般      否
1   青年  否      否            好        否
2   青年  是      否            好        是
3   青年  是      是            一般      是
4   青年  否      否            一般      否
5   中年  否      否            一般      否
6   中年  否      否            好        否
7   中年  是      是            好        是
8   中年  否      是            非常好    是
9   中年  否      是            非常好    是
10  老年  否      是            非常好    是
11  老年  否      是            好        是
12  老年  是      否            好        是
13  老年  是      否            非常好    是
14  老年  否      否            一般      否

Entropy

Code language: Python
def calc_ent(datasets):
    data_length = len(datasets)
    label_count = {}
    for i in range(data_length):
        label = datasets[i][-1]
        if label not in label_count:
            label_count[label] = 0
        label_count[label] += 1
    ent = -sum([(p / data_length) * log(p / data_length, 2)
                for p in label_count.values()])
    return ent

Conditional entropy

Code language: Python
def cond_ent(datasets, axis=0):
    data_length = len(datasets)
    feature_sets = {}
    for i in range(data_length):
        feature = datasets[i][axis]
        if feature not in feature_sets:
            feature_sets[feature] = []
        feature_sets[feature].append(datasets[i])
    cond_ent = sum([(len(p) / data_length) * calc_ent(p)
                    for p in feature_sets.values()])
    return cond_ent
Code language: Python
calc_ent(datasets)
Output:
0.9709505944546686
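This agrees with the hand calculation (worked out here as a check; not in the original post): the 类别 column contains 9 '是' and 6 '否', so

Code language: LaTeX
H(D) = -\frac{9}{15}\log_2\frac{9}{15} - \frac{6}{15}\log_2\frac{6}{15} \approx 0.971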

Information gain

Code language: Python
def info_gain(ent, cond_ent):
    return ent - cond_ent
Code language: Python
def info_gain_train(datasets):
    count = len(datasets[0]) - 1
    ent = calc_ent(datasets)
    best_feature = []
    for c in range(count):
        c_info_gain = info_gain(ent, cond_ent(datasets, axis=c))
        best_feature.append((c, c_info_gain))
        print('特征({}) 的信息增益为: {:.3f}'.format(labels[c], c_info_gain))
    # pick the feature with the largest information gain
    best_ = max(best_feature, key=lambda x: x[-1])
    return '特征({})的信息增益最大,选择为根节点特征'.format(labels[best_[0]])
Code language: Python
info_gain_train(np.array(datasets))
Output:
特征(年龄) 的信息增益为:0.083
特征(有工作) 的信息增益为:0.324
特征(有自己的房子) 的信息增益为:0.420
特征(信贷情况) 的信息增益为:0.363


'特征(有自己的房子)的信息增益最大,选择为根节点特征'
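As a worked check of the first value above (the intermediate numbers are added here, not in the original post): splitting on 年龄 gives three subsets of 5 instances each, with class counts 2/3, 3/2 and 4/1 ('是'/'否'), so

Code language: LaTeX
H(D\mid \text{年龄}) = \tfrac{1}{3}\,(0.971 + 0.971 + 0.722) \approx 0.888,
\qquad g(D,\text{年龄}) = 0.971 - 0.888 \approx 0.083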

Generating a decision tree with the ID3 algorithm

Code language: Python
# Define the tree node class (one child per feature value)
class Node:
    def __init__(self, root=True, label=None, feature_name=None, feature=None):
        self.root = root
        self.label = label
        self.feature_name = feature_name
        self.feature = feature
        self.tree = {}
        self.result = {
            'label:': self.label,
            'feature': self.feature,
            'tree': self.tree
        }

    def __repr__(self):
        return '{}'.format(self.result)

    def add_node(self, val, node):
        self.tree[val] = node

    def predict(self, features):
        if self.root is True:
            return self.label
        return self.tree[features[self.feature]].predict(features)


class DTree:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self._tree = {}

    # entropy
    @staticmethod
    def calc_ent(datasets):
        data_length = len(datasets)
        label_count = {}
        for i in range(data_length):
            label = datasets[i][-1]
            if label not in label_count:
                label_count[label] = 0
            label_count[label] += 1
        ent = -sum([(p / data_length) * log(p / data_length, 2)
                    for p in label_count.values()])
        return ent

    # empirical conditional entropy
    def cond_ent(self, datasets, axis=0):
        data_length = len(datasets)
        feature_sets = {}
        for i in range(data_length):
            feature = datasets[i][axis]
            if feature not in feature_sets:
                feature_sets[feature] = []
            feature_sets[feature].append(datasets[i])
        cond_ent = sum([(len(p) / data_length) * self.calc_ent(p)
                        for p in feature_sets.values()])
        return cond_ent

    # information gain
    @staticmethod
    def info_gain(ent, cond_ent):
        return ent - cond_ent

    def info_gain_train(self, datasets):
        count = len(datasets[0]) - 1
        ent = self.calc_ent(datasets)
        best_feature = []
        for c in range(count):
            c_info_gain = self.info_gain(ent, self.cond_ent(datasets, axis=c))
            best_feature.append((c, c_info_gain))
        # pick the feature with the largest information gain
        best_ = max(best_feature, key=lambda x: x[-1])
        return best_

    def train(self, train_data):
        """
        input: dataset D (a DataFrame), feature set A, threshold eta
        output: decision tree T
        """
        _, y_train, features = (train_data.iloc[:, :-1],
                                train_data.iloc[:, -1],
                                train_data.columns[:-1])
        # 1. If every instance in D belongs to the same class Ck, T is a single-node tree; label the node with Ck and return T
        if len(y_train.value_counts()) == 1:
            return Node(root=True, label=y_train.iloc[0])

        # 2. If the feature set A is empty, T is a single-node tree; label the node with the majority class Ck in D and return T
        if len(features) == 0:
            return Node(
                root=True,
                label=y_train.value_counts().sort_values(
                    ascending=False).index[0])

        # 3. Compute the information gain of each feature (as in 5.1); Ag is the feature with the largest gain
        max_feature, max_info_gain = self.info_gain_train(np.array(train_data))
        max_feature_name = features[max_feature]

        # 4. If the information gain of Ag is below the threshold eta, T is a single-node tree; label the node with the majority class Ck in D and return T
        if max_info_gain < self.epsilon:
            return Node(
                root=True,
                label=y_train.value_counts().sort_values(
                    ascending=False).index[0])

        # 5. Build the subtree node and split D by the values of Ag
        node_tree = Node(
            root=False, feature_name=max_feature_name, feature=max_feature)

        feature_list = train_data[max_feature_name].value_counts().index
        for f in feature_list:
            sub_train_df = train_data.loc[train_data[max_feature_name] ==
                                          f].drop([max_feature_name], axis=1)

            # 6. Recursively build the subtree for each subset
            sub_tree = self.train(sub_train_df)
            node_tree.add_node(f, sub_tree)

        # pprint.pprint(node_tree.tree)
        return node_tree

    def fit(self, train_data):
        self._tree = self.train(train_data)
        return self._tree

    def predict(self, X_test):
        return self._tree.predict(X_test)
Code language: Python
datasets, labels = create_data()
data_df = pd.DataFrame(datasets, columns=labels)
dt = DTree()
tree = dt.fit(data_df)
Code language: Python
tree
Output:
{'label:': None, 'feature': 2, 'tree': {'否': {'label:': None, 'feature': 1, 'tree': {'否': {'label:': '否', 'feature': None, 'tree': {}}, '是': {'label:': '是', 'feature': None, 'tree': {}}}}, '是': {'label:': '是', 'feature': None, 'tree': {}}}}
Code language: Python
dt.predict(['老年', '否', '否', '一般'])
Output:
'否'
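Note that Node.predict looks up children by positional index in the feature list; in this example the indices still line up after 有自己的房子 is dropped, but in general they may not. A safer variant (a hypothetical helper, not part of the original code) walks the tree by feature name:

Code language: Python
# Hypothetical helper: predict from a dict keyed by feature name, so the lookup
# does not depend on column positions after features are dropped during training.
def predict_by_name(node, sample):
    if node.root:          # in this Node class, root=True marks a leaf
        return node.label
    return predict_by_name(node.tree[sample[node.feature_name]], sample)

sample = dict(zip(labels[:-1], ['老年', '否', '否', '一般']))
print(predict_by_name(tree, sample))   # expected: '否'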

Scikit-learn examples

Code language: Python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from collections import Counter

Using the Iris dataset, we can build the following tree:

Code language: Python
# data
def create_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df['label'] = iris.target
    df.columns = [
        'sepal length', 'sepal width', 'petal length', 'petal width', 'label'
    ]
    data = np.array(df.iloc[:100, [0, 1, -1]])
    # print(data)
    return data[:, :2], data[:, -1], iris.feature_names[0:2]


X, y, feature_name = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

Decision tree classification

Code language: Python
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
import graphviz
from sklearn import tree

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

clf.score(X_test, y_test)
Output:
0.9666666666666667
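The exact score depends on the random train/test split. As a quick stability check (an addition, not part of the original notebook), accuracy can be averaged over several folds:

Code language: Python
# Average accuracy over 5 folds on the same two-feature, two-class subset.
from sklearn.model_selection import cross_val_score
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)
print(scores.mean())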

Once trained, the tree can be plotted with the plot_tree function:

Code language: Python
tree.plot_tree(clf)
Output:
[Text(197.83636363636364, 195.696, 'X[0] <= 5.45\ngini = 0.5\nsamples = 70\nvalue = [36, 34]'),
 Text(121.74545454545455, 152.208, 'X[1] <= 2.8\ngini = 0.157\nsamples = 35\nvalue = [32, 3]'),
 Text(60.872727272727275, 108.72, 'X[0] <= 4.75\ngini = 0.444\nsamples = 3\nvalue = [1, 2]'),
 Text(30.436363636363637, 65.232, 'gini = 0.0\nsamples = 1\nvalue = [1, 0]'),
 Text(91.30909090909091, 65.232, 'gini = 0.0\nsamples = 2\nvalue = [0, 2]'),
 Text(182.61818181818182, 108.72, 'X[0] <= 5.3\ngini = 0.061\nsamples = 32\nvalue = [31, 1]'),
 Text(152.1818181818182, 65.232, 'gini = 0.0\nsamples = 29\nvalue = [29, 0]'),
 Text(213.05454545454546, 65.232, 'X[1] <= 3.2\ngini = 0.444\nsamples = 3\nvalue = [2, 1]'),
 Text(182.61818181818182, 21.744, 'gini = 0.0\nsamples = 1\nvalue = [0, 1]'),
 Text(243.4909090909091, 21.744, 'gini = 0.0\nsamples = 2\nvalue = [2, 0]'),
 Text(273.92727272727274, 152.208, 'X[1] <= 3.5\ngini = 0.202\nsamples = 35\nvalue = [4, 31]'),
 Text(243.4909090909091, 108.72, 'gini = 0.0\nsamples = 31\nvalue = [0, 31]'),
 Text(304.3636363636364, 108.72, 'gini = 0.0\nsamples = 4\nvalue = [4, 0]')]

The tree can also be exported:

Code language: Python
tree_pic = export_graphviz(clf, out_file="mytree.pdf")
with open('mytree.pdf') as f:
    dot_graph = f.read()
Code language: Python
graphviz.Source(dot_graph)
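A more direct variant (a sketch added here, not in the original post): export_graphviz can return the DOT source as a string when out_file=None, so the intermediate file is unnecessary:

Code language: Python
# Get the DOT source directly instead of writing it to a file and reading it back.
dot_data = export_graphviz(clf, out_file=None,
                           feature_names=feature_name,
                           class_names=['0', '1'], filled=True)
graphviz.Source(dot_data)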

Alternatively, the tree can be exported in text form with the export_text function. This requires no external libraries and is more compact:

Code language: Python
from sklearn.tree import export_text
Code language: Python
r = export_text(clf, feature_names=feature_name)
Code language: Python
print(r)
Output:
|--- sepal width (cm) <= 3.15
|   |--- sepal length (cm) <= 4.95
|   |   |--- sepal width (cm) <= 2.65
|   |   |   |--- class: 1.0
|   |   |--- sepal width (cm) >  2.65
|   |   |   |--- class: 0.0
|   |--- sepal length (cm) >  4.95
|   |   |--- class: 1.0
|--- sepal width (cm) >  3.15
|   |--- sepal length (cm) <= 5.85
|   |   |--- class: 0.0
|   |--- sepal length (cm) >  5.85
|   |   |--- class: 1.0

Decision tree regression

Code language: Python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
import matplotlib.pyplot as plt
Code language: Python
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(16))
Code language: Python
# Fit regression model
regr_1 = DecisionTreeRegressor(max_depth=2)
regr_2 = DecisionTreeRegressor(max_depth=5)
regr_1.fit(X, y)
regr_2.fit(X, y)

# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_1 = regr_1.predict(X_test)
y_2 = regr_2.predict(X_test)

# Plot the results
plt.figure()
plt.scatter(X, y, s=20, edgecolor="black", c="darkorange", label="data")
plt.plot(X_test, y_1, color="cornflowerblue", label="max_depth=2", linewidth=2)
plt.plot(X_test, y_2, color="yellowgreen", label="max_depth=5", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()

Decision tree hyperparameter tuning

Code language: Python
# import the libraries
from sklearn.tree import DecisionTreeClassifier
from sklearn import datasets
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn import metrics
Code language: Python
# load the dataset
X = datasets.load_iris()  # returns a Bunch (dict-like object) with keys including data, target and target_names
data = X.data
target = X.target
name = X.target_names
x, y = datasets.load_iris(return_X_y=True)  # returns (data, target) directly
print(x.shape, y.shape)
Output:
(150, 4) (150,)
Code language: Python
# split the data into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x,
                                                    y,
                                                    test_size=0.2,
                                                    random_state=100)
Code language: Python
# use GridSearchCV to search for the best parameters (as a dict)
param = {
    'criterion': ['gini'],
    'max_depth': [30, 50, 60, 100],
    'min_samples_leaf': [2, 3, 5, 10],
    'min_impurity_decrease': [0.1, 0.2, 0.5]
}
grid = GridSearchCV(DecisionTreeClassifier(), param_grid=param, cv=6)
grid.fit(x_train, y_train)
print('最优分类器:', grid.best_params_, '最优分数:', grid.best_score_)  # print the best parameters and score
Output:
最优分类器: {'criterion': 'gini', 'max_depth': 30, 'min_impurity_decrease': 0.2, 'min_samples_leaf': 3} 最优分数: 0.9416666666666665
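As a follow-up (not part of the original output), the tuned model can be evaluated on the held-out test set; GridSearchCV refits the best estimator on the full training set:

Code language: Python
# Evaluate the best estimator found by the grid search on the test split.
best_clf = grid.best_estimator_
print('test accuracy:', metrics.accuracy_score(y_test, best_clf.predict(x_test)))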

References

  • https://github.com/fengdu78/lihang-code
  • Li Hang, 《统计学习方法》 (Statistical Learning Methods), Tsinghua University Press, 2019
  • https://scikit-learn.org
