Interview | Do You Really Understand count(*) and count(1)?

2022-04-18 16:09:55

count(*) and count(1) are used all the time, but do you really understand them?

This article is based on Spark 3.2.

The conclusion first: in Spark SQL, count(*) is equivalent to count(1), both in execution efficiency and in the result it returns.

Although I was the one who set this as a homework question, consider this article the homework I handed in myself.

Outline

1. Case demonstration

2. Why count(*) == count(1): source-code analysis

3. Spark SQL's count function in detail (the code of count is simple, but its authors still built a few optimizations into it)

Case Demonstration

The SQL:

SELECT A,COUNT(*) AS CNT FROM TESTDATA2 GROUP BY A
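
The plan printout referenced below appeared as a screenshot in the original post. It can be reproduced with something like the following (a minimal sketch, assuming a local SparkSession; the TESTDATA2 contents here are a stand-in, not the original test data):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: build a local session and register a small two-column view as TESTDATA2.
val spark = SparkSession.builder().master("local[*]").appName("count-demo").getOrCreate()
import spark.implicits._

Seq((1, 1), (1, 2), (2, 1)).toDF("A", "B").createOrReplaceTempView("TESTDATA2")

val df = spark.sql("SELECT A, COUNT(*) AS CNT FROM TESTDATA2 GROUP BY A")

// Print the parsed (still unresolved) logical plan; it already contains
// count(1) where the query said count(*).
println(df.queryExecution.logical)
```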

As the printed plan shows, count(*) is rewritten to count(1) as early as the generation of the unresolved logical plan.

In other words, in Spark SQL, count(*) == count(1).

Source-Code Analysis

In the AST generated from the SQL, the node that handles function calls is functionCall:

In the AstBuilder class, find the visitFunctionCall method:
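
The screenshot of that method is missing here; abridged from Spark 3.2's `AstBuilder#visitFunctionCall` (other branches and error handling omitted), the rewrite happens right at the argument list: a lone `*` passed to `count` without DISTINCT is replaced by the literal 1:

```scala
// Abridged from org.apache.spark.sql.catalyst.parser.AstBuilder#visitFunctionCall (Spark 3.2).
val name = ctx.functionName.getText
val isDistinct = Option(ctx.setQuantifier()).exists(_.DISTINCT != null)
val arguments = ctx.argument.asScala.map(expression).toSeq match {
  case Seq(UnresolvedStar(None))
      if name.toLowerCase(Locale.ROOT) == "count" && !isDistinct =>
    // Transform COUNT(*) into COUNT(1).
    Seq(Literal(1))
  case expressions => expressions
}
```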

Spark SQL's count Function in Detail

The code of count is simple, but its authors still built a few optimizations into it.

What count does:

1. count(*) returns the total number of retrieved rows, including rows containing null.

2. count(expr[, expr...]) returns the number of rows for which the supplied expressions are all non-null.

3. count(DISTINCT expr[, expr...]) returns the number of rows for which the supplied expressions are unique and non-null.
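
The three behaviors can be modeled in a few lines of plain Scala (a sketch only, with `Option` standing in for SQL NULL):

```scala
// One column of four rows; None plays the role of SQL NULL.
val rows: Seq[Option[Int]] = Seq(None, Some(5), Some(5), Some(20))

val countStar     = rows.size                    // count(*): all rows, NULLs included -> 4
val countExpr     = rows.count(_.isDefined)      // count(col): non-null rows only -> 3
val countDistinct = rows.flatten.distinct.size   // count(DISTINCT col): unique non-null -> 2
```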

One detail:

To decide whether an expression is null, Spark first consults the expression's nullable attribute; only when that attribute cannot settle the question does it fall back to IsNull on the concrete value.

This saves some computation. For a constant such as the 1 in count(1), the literal can never be null, so the attribute alone decides it and no per-row eval is needed.


package org.apache.spark.sql.catalyst.expressions.aggregate

import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
import org.apache.spark.sql.catalyst.dsl.expressions._
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.trees.TreePattern.{COUNT, TreePattern}
import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.types._

// scalastyle:off line.size.limit
@ExpressionDescription(
  usage = """
    _FUNC_(*) - Returns the total number of retrieved rows, including rows containing null.

    _FUNC_(expr[, expr...]) - Returns the number of rows for which the supplied expression(s) are all non-null.

    _FUNC_(DISTINCT expr[, expr...]) - Returns the number of rows for which the supplied expression(s) are unique and non-null.
  """,
  examples = """
    Examples:
      > SELECT _FUNC_(*) FROM VALUES (NULL), (5), (5), (20) AS tab(col);
       4
      > SELECT _FUNC_(col) FROM VALUES (NULL), (5), (5), (20) AS tab(col);
       3
      > SELECT _FUNC_(DISTINCT col) FROM VALUES (NULL), (5), (5), (10) AS tab(col);
       2
  """,
  group = "agg_funcs",
  since = "1.0.0")
// scalastyle:on line.size.limit
case class Count(children: Seq[Expression]) extends DeclarativeAggregate {

  override def nullable: Boolean = false

  final override val nodePatterns: Seq[TreePattern] = Seq(COUNT)

  // Return data type.
  override def dataType: DataType = LongType

  override def checkInputDataTypes(): TypeCheckResult = {
    if (children.isEmpty && !SQLConf.get.getConf(SQLConf.ALLOW_PARAMETERLESS_COUNT)) {
      TypeCheckResult.TypeCheckFailure(s"$prettyName requires at least one argument. " +
        s"If you have to call the function $prettyName without arguments, set the legacy " +
        s"configuration `${SQLConf.ALLOW_PARAMETERLESS_COUNT.key}` as true")
    } else {
      TypeCheckResult.TypeCheckSuccess
    }
  }

  protected lazy val count = AttributeReference("count", LongType, nullable = false)()

  override lazy val aggBufferAttributes = count :: Nil

  override lazy val initialValues = Seq(
    /* count = */ Literal(0L)
  )

  override lazy val mergeExpressions = Seq(
    /* count = */ count.left + count.right
  )

  override lazy val evaluateExpression = count

  override def defaultResult: Option[Literal] = Option(Literal(0L))

  override lazy val updateExpressions = {
    // Drop expressions that can never be null; this check uses only the
    // expression's nullable attribute, without evaluating anything.
    val nullableChildren = children.filter(_.nullable)
    if (nullableChildren.isEmpty) {
      Seq(
        /* count = */ count + 1L
      )
    } else {
      // When the nullable attribute cannot settle it, fall back to IsNull
      // on the concrete value of each expression.
      Seq(
        /* count = */ If(nullableChildren.map(IsNull).reduce(Or), count, count + 1L)
      )
    }
  }

  override protected def withNewChildrenInternal(newChildren: IndexedSeq[Expression]): Count =
    copy(children = newChildren)
}

object Count {
  def apply(child: Expression): Count = Count(child :: Nil)
}
