Article outline
- Normalization in Spark
- MaxAbsScaler
- MinMaxScaler
- References
Normalization in Spark
MaxAbsScaler
- http://spark.apache.org/docs/latest/api/scala/org/apache/spark/ml/feature/MaxAbsScaler.html
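MaxAbsScaler divides every feature by that feature's maximum absolute value, mapping all values into [-1, 1] without shifting or centering the data, so sparsity is preserved. A minimal usage sketch, assuming an existing SparkSession named spark and illustrative column names:

import org.apache.spark.ml.feature.MaxAbsScaler
import org.apache.spark.ml.linalg.Vectors

val maxAbsData = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.1, -8.0)),
  (1, Vectors.dense(2.0, 1.0, -4.0)),
  (2, Vectors.dense(4.0, 10.0, 8.0))
)).toDF("id", "features")

// Each feature is divided by its max absolute value (4.0, 10.0, 8.0 here).
val maxAbsScaler = new MaxAbsScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

maxAbsScaler.fit(maxAbsData).transform(maxAbsData).show(false)
// e.g. row 0 -> [0.25, 0.01, -1.0]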
MinMaxScaler
- http://spark.apache.org/docs/latest/api/scala/org/apache/spark/ml/feature/MinMaxScaler.html
Rescale each feature individually to a common range [min, max] linearly using column summary
statistics, which is also known as min-max normalization or rescaling. The rescaled value for
feature E is calculated as:

Rescaled(e_i) = (e_i - E_min) / (E_max - E_min) * (max - min) + min

For the case E_max == E_min, Rescaled(e_i) = 0.5 * (max + min).
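As a quick worked check of the formula: with feature values {0.5, 1.0, 10.0} and the default output range [min, max] = [0, 1], we have E_min = 0.5 and E_max = 10.0, so Rescaled(1.0) = (1.0 - 0.5) / (10.0 - 0.5) ≈ 0.053, while 0.5 maps to 0 and 10.0 maps to 1.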
Note:
Since zero values will probably be transformed to non-zero values, output of the transformer will be DenseVector even for sparse input.
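Before looking at the internals, here is a minimal usage sketch, again assuming an existing SparkSession named spark and illustrative column names:

import org.apache.spark.ml.feature.MinMaxScaler
import org.apache.spark.ml.linalg.Vectors

val data = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.1, -1.0)),
  (1, Vectors.dense(2.0, 1.1, 1.0)),
  (2, Vectors.dense(3.0, 10.1, 3.0))
)).toDF("id", "features")

// Rescales each feature to [0, 1] by default; setMin/setMax change the target range.
val scaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

val model = scaler.fit(data)      // learns per-feature E_min / E_max
model.transform(data).show(false) // row 0 -> [0.0, 0.0, 0.0], row 2 -> [1.0, 1.0, 1.0]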
Core code: the essence is simply computing the max and min values.
override def fit(dataset: Dataset[_]): MinMaxScalerModel = {
  transformSchema(dataset.schema, logging = true)
  val Row(max: Vector, min: Vector) = dataset
    .select(Summarizer.metrics("max", "min").summary(col($(inputCol))).as("summary"))
    .select("summary.max", "summary.min")
    .first()
  copyValues(new MinMaxScalerModel(uid, min.compressed, max.compressed).setParent(this))
}
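For reference, the same per-feature statistics that fit() relies on can be computed directly in user code with Summarizer. A small sketch, assuming a DataFrame data with a Vector column named "features" (such as the one above):

import org.apache.spark.ml.stat.Summarizer
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.col

// Same per-feature max/min that fit() computes, obtained directly in user code.
val Row(max: Vector, min: Vector) = data
  .select(Summarizer.metrics("max", "min").summary(col("features")).as("summary"))
  .select("summary.max", "summary.min")
  .first()

println(s"max = $max, min = $min")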
Note: the computation above only supports input in Vector form, so how do we handle a column holding a single scalar value?
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.{functions, Row}

// Build a sample DataFrame whose "features" column is a plain Double, not a Vector.
val df_num = spark.createDataFrame(Seq(
  (0, 0.5, -1.0),
  (1, 1.0, 1.0),
  (2, 10.0, 2.0),
  (3, 10.0, 0.0)
)).toDF("id", "features", "result")
df_num.show()

// Aggregate the scalar column directly, then wrap the result into a Vector by hand.
val temp_mean = df_num.select(functions.mean(df_num.col("features"))).collect()(0)
println(temp_mean.getDouble(0))
val Row(mean2: Vector) = Row(Vectors.dense(temp_mean.getDouble(0)))
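Another common answer to the scalar-column question, rather than hand-rolling the statistics as above, is to wrap the Double column into a one-element Vector with VectorAssembler and then apply MinMaxScaler as usual. A sketch, reusing the df_num DataFrame defined above:

import org.apache.spark.ml.feature.{MinMaxScaler, VectorAssembler}

// MinMaxScaler only accepts Vector input, so wrap the scalar column first.
val assembler = new VectorAssembler()
  .setInputCols(Array("features"))
  .setOutputCol("featuresVec")
val assembled = assembler.transform(df_num)

val scalarScaler = new MinMaxScaler()
  .setInputCol("featuresVec")
  .setOutputCol("scaledFeatures")

scalarScaler.fit(assembled).transform(assembled).show(false)
// 0.5 -> [0.0], 1.0 -> [~0.053], 10.0 -> [1.0] under the default [0, 1] range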
References
Articles in this series:
- A brief introduction to the basic concepts of regularization, standardization, and normalization
- Regularization in Spark
- Standardization in Spark
- Normalization in Spark
- Extending Spark's normalization functions
Spark documentation on feature processing:
- http://spark.apache.org/docs/latest/api/scala/org/apache/spark/ml/feature/index.html
Concept overview:
- https://blog.csdn.net/u014381464/article/details/81101551
Other references:
- https://segmentfault.com/a/1190000014042959
- https://www.cnblogs.com/nucdy/p/7994542.html
- https://blog.csdn.net/weixin_34117522/article/details/88875270
- https://blog.csdn.net/xuejianbest/article/details/85779029