"
The thing that is really hard, and really amazing, is giving up on
being perfect and beginning the work of becoming yourself.
---- Anna Quindlen
"
Editor: 小闫同学. Translated by 尚剑 from The 10 Algorithms Machine Learning Engineers Need to Know, written in English by James Le.
There is no doubt that the sub-field of machine learning / artificial intelligence has gained increasing popularity in the past couple of years. As Big Data is the hottest trend in the tech industry at the moment, machine learning is incredibly powerful for making predictions or calculated suggestions based on large amounts of data. Some of the most common examples of machine learning are Netflix's algorithms, which suggest movies based on ones you have watched in the past, or Amazon's algorithms, which recommend books based on ones you have bought before.
So if you want to learn more about machine learning, how do you start?
For me, my first introduction was an Artificial Intelligence class I took while studying abroad in Copenhagen. My lecturer was a full-time Applied Math and CS professor at the Technical University of Denmark whose research areas are logic and artificial intelligence, focusing primarily on the use of logic to model human-like planning, reasoning, and problem solving. The class was a mix of discussion of theory/core concepts and hands-on problem solving. The textbook we used is one of the AI classics: Russell and Norvig's Artificial Intelligence: A Modern Approach[1] (Chinese translation: 《人工智能:一种现代的方法》[2]), and we covered major topics including intelligent agents, problem solving by searching, adversarial search, probability theory, multi-agent systems, social AI, and the philosophy/ethics/future of AI. At the end of the class, in a team of 3, we implemented simple search-based agents solving transportation tasks in a virtual environment as a programming project.
I learned a tremendous amount thanks to that class, and decided to keep learning about this specialized topic. In the last few weeks, I have been to multiple tech talks in San Francisco on deep learning, neural networks, and data architecture, and to a machine learning conference with a lot of well-known professionals in the field. Most importantly, I enrolled in Udacity's Intro to Machine Learning[3] online course (Chinese version[4]) at the beginning of June and just finished it a few days ago. In this post, I want to share some of the most common machine learning algorithms that I learned from the course.
Machine learning algorithms can be divided into 3 broad categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is useful in cases where a property (label) is available for a certain dataset (training set) but is missing and needs to be predicted for other instances. Unsupervised learning is useful in cases where the challenge is to discover implicit relationships in a given unlabeled dataset (items are not pre-assigned). Reinforcement learning falls between these 2 extremes: there is some form of feedback available for each predictive step or action, but no precise label or error message. Since this was an intro course, I didn't learn about reinforcement learning, but I hope that these 10 algorithms on supervised and unsupervised learning will be enough to keep you interested.
Supervised Learning
1. Decision Trees
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance-event outcomes, resource costs, and utility. Take a look at the image to get a sense of what it looks like.
From a business decision point of view, a decision tree is the minimum number of yes/no questions that one has to ask in order to assess, most of the time, the probability of making a correct decision. As a method, it allows you to approach the problem in a structured and systematic way to arrive at a logical conclusion.
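As a concrete illustration, here is a minimal sketch of training a shallow decision tree; scikit-learn and the iris dataset are my choices for the example, not something the original article prescribes.

```python
# A minimal decision tree sketch (library and dataset chosen for illustration).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each internal node of the fitted tree is one yes/no question about a
# feature threshold, e.g. "is petal length <= 2.45 cm?"
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```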
2. Naive Bayesian Classification
Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Bayes' theorem gives P(A|B) = P(B|A) P(A) / P(B), where P(A|B) is the posterior probability, P(B|A) is the likelihood, P(A) is the class prior probability, and P(B) is the predictor prior probability.
Some real-world examples are:
•Marking an email as spam or not spam
•Classifying a news article as technology, politics, or sports
•Checking whether a piece of text expresses positive or negative emotions
•Face recognition software
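For the spam example above, a minimal sketch might look like the following; the tiny corpus, the labels, and the use of scikit-learn's MultinomialNB are all invented for illustration.

```python
# A minimal Naive Bayes spam-filtering sketch on an invented toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win cash now", "cheap pills online",
          "meeting at noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Word counts as features; the "naive" part is treating words as independent.
vec = CountVectorizer()
X = vec.fit_transform(emails)

clf = MultinomialNB()
clf.fit(X, labels)
print(clf.predict(vec.transform(["win a meeting now"])))
```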
3. Ordinary Least Squares Regression
If you know statistics, you probably have heard of linear regression before. Least squares is a method for performing linear regression. You can think of linear regression as the task of fitting a straight line through a set of points. There are multiple possible strategies to do this, and the "ordinary least squares" strategy goes like this: you can draw a line, and then for each of the data points, measure the vertical distance between the point and the line, square it, and add these up; the fitted line is the one where this sum of squared distances is as small as possible.
Linear refers to the kind of model you are using to fit the data, while least squares refers to the kind of error metric you are minimizing over.
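A minimal sketch of this fitting procedure, using NumPy's least-squares solver on synthetic data of my own invention:

```python
# A minimal ordinary least squares sketch with NumPy on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=50)  # noisy line y = 2x + 1

# Design matrix [x, 1]; lstsq minimizes the sum of squared vertical distances.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"fitted line: y = {slope:.2f} x + {intercept:.2f}")
```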
4. Logistic Regression
Logistic regression is a powerful statistical way of modeling a binomial outcome with one or more explanatory variables. It measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative logistic distribution.
In general, regressions can be used in real-world applications such as:
•Credit scoring
•Measuring the success rates of marketing campaigns
•Predicting the revenues of a certain product
•Predicting whether there is going to be an earthquake on a particular day
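Here is a minimal sketch of fitting a logistic regression on invented two-feature data; scikit-learn is my choice of library, and the feature interpretation in the comment is hypothetical.

```python
# A minimal logistic regression sketch on synthetic binary-outcome data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # e.g. income and debt ratio (hypothetical)
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression()
clf.fit(X, y)
# predict_proba applies the logistic function to a linear combination
# of the features, yielding probabilities for each class.
print(clf.predict_proba(X[:3]))
```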
5. Support Vector Machines (SVM)
SVM is a binary classification algorithm. Given a set of points of 2 types in an N-dimensional space, SVM generates an (N-1)-dimensional hyperplane to separate those points into 2 groups. Say you have some points of 2 types on a piece of paper that are linearly separable. SVM will find a straight line that separates those points into the 2 types while staying as far as possible from all of them.
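A minimal sketch of the linearly separable case, using scikit-learn's SVC with a linear kernel on two synthetic point clouds:

```python
# A minimal linear SVM sketch separating two synthetic point clouds.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, size=(50, 2)),   # class 0 cloud
               rng.normal(+2, 0.5, size=(50, 2))])  # class 1 cloud
y = np.array([0] * 50 + [1] * 50)

# kernel="linear" finds the separating line with the largest margin,
# i.e. the line that stays as far as possible from the nearest points.
clf = SVC(kernel="linear")
clf.fit(X, y)
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
```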
In terms of scale, some of the biggest problems that have been solved using SVMs (with suitably modified implementations) are display advertising, human splice site recognition, image-based gender detection, and large-scale image classification.
6. Ensemble Methods
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a weighted vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, bagging, and boosting.
So how do ensemble methods work and why are they superior to individual models?
•They average out biases: If you average a bunch of democratic-leaning polls and republican-leaning polls together, you will get an average that isn't leaning either way.
•They reduce the variance: The aggregate opinion of a bunch of models is less noisy than the single opinion of one of the models. In finance, this is called diversification: a mixed portfolio of many stocks will be much less variable than just one of the stocks alone. This is why your models will be better with more data points rather than fewer.
•They are unlikely to over-fit: If you have individual models that didn't over-fit, and you are combining the predictions from each model in a simple way (average, weighted average, logistic regression), then there's no room for over-fitting.
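To make the voting idea concrete, here is a minimal sketch that combines three different models by soft (probability-averaging) voting; the particular models, the dataset, and scikit-learn's VotingClassifier are my choices for illustration.

```python
# A minimal ensemble sketch: average the predicted probabilities of
# three different classifiers and vote on the result.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=5000)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="soft",  # average class probabilities across the three models
)
print("cv accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```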
Unsupervised Learning
7. Clustering Algorithms
Clustering is the task of grouping a set of objects such that objects in the same group (cluster) are more similar to each other than to those in other groups.
Every clustering algorithm is different, and here are some of them:
•Centroid-based algorithms
•Connectivity-based algorithms
•Density-based algorithms
•Probabilistic
•Dimensionality Reduction
•Neural networks / Deep Learning
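As one concrete example from the centroid-based family, here is a minimal k-means sketch on synthetic data; scikit-learn is my choice of library here.

```python
# A minimal k-means sketch: three synthetic blobs, three clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.3, size=(40, 2)) for loc in (-3, 0, 3)])

# k-means alternately assigns points to the nearest centroid and
# recomputes each centroid as the mean of its assigned points.
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)
print("centroids:\n", km.cluster_centers_)
```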
8. Principal Component Analysis (PCA)
PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.
Some of the applications of PCA include compression, simplifying data for easier learning, and visualization. Notice that domain knowledge is very important when choosing whether to go forward with PCA or not. It is not suitable in cases where the data is noisy (all the components of PCA have quite a high variance).
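A minimal sketch of using PCA to project a 4-dimensional dataset down to 2 dimensions, with scikit-learn and the iris dataset chosen purely for illustration:

```python
# A minimal PCA sketch: project 4-D measurements onto the 2 orthogonal
# directions of maximum variance (useful for visualization/compression).
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)
```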
9. Singular Value Decomposition (SVD)
In linear algebra, SVD is a factorization of a real or complex matrix. For a given m × n matrix M, there exists a decomposition M = UΣV*, where U and V are unitary matrices, V* is the conjugate transpose of V (the ordinary transpose in the real case), and Σ is a diagonal matrix of singular values.
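A minimal sketch of computing an SVD and checking the reconstruction, using NumPy on a small random matrix:

```python
# A minimal SVD sketch: factor M and verify M = U @ diag(s) @ V^T.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 3))

U, s, Vt = np.linalg.svd(M, full_matrices=False)  # s holds the diagonal of Σ
M_rebuilt = U @ np.diag(s) @ Vt
print("max reconstruction error:", np.abs(M - M_rebuilt).max())  # ~1e-15
```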
PCA is actually a simple application of SVD. In computer vision, the first face recognition algorithms used PCA and SVD to represent faces as a linear combination of "eigenfaces", perform dimensionality reduction, and then match faces to identities via simple methods; although modern methods are much more sophisticated, many still depend on similar techniques.
10. Independent Component Analysis (ICA)
ICA is a statistical technique for revealing hidden factors that underlie sets of random variables, measurements, or signals. ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown. The latent variables are assumed to be non-Gaussian and mutually independent, and they are called independent components of the observed data.
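A minimal sketch of the classic un-mixing use case: two known synthetic sources are linearly mixed, and FastICA (scikit-learn's ICA implementation, my choice here) recovers them up to order and scale.

```python
# A minimal ICA sketch: mix two synthetic sources, then un-mix them.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * t),            # source 1: sine wave
                     np.sign(np.sin(3 * t))])  # source 2: square wave
A = np.array([[1.0, 0.5],                      # unknown mixing matrix
              [0.5, 1.0]])
X = S @ A.T                                    # observed mixed signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered sources (up to order and scale)
print("recovered shape:", S_est.shape)
```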
ICA is related to PCA, but it is a much more powerful technique that is capable of finding the underlying factors of sources when these classic methods fail completely. Its applications include digital images, document databases, economic indicators and psychometric measurements.
Now go forth and wield your understanding of algorithms to create machine learning applications that make better experiences for people everywhere.
Chinese translation: https://www.infoq.cn/article/10-algorithms-machine-learning-engineers-need-to-know/ Read the original English article: The 10 Algorithms Machine Learning Engineers Need to Know[5]
References
[1] Russell & Norvig, Artificial Intelligence: A Modern Approach (3rd Edition): https://www.amazon.com/Artificial-Intelligence-Modern-Approach-3rd/dp/0136042597
[2] 《人工智能:一种现代的方法》 (Chinese translation of [1]): https://book.douban.com/subject/6730363/
[3] Intro to Machine Learning (Udacity): https://www.udacity.com/course/intro-to-machine-learning--ud120
[4] Intro to Machine Learning (Udacity, Chinese site): https://cn.udacity.com/course/intro-to-machine-learning--ud120
[5] The 10 Algorithms Machine Learning Engineers Need to Know: http://www.kdnuggets.com/2016/08/10-algorithms-machine-learning-engineers.html/2