OushuDB: Viewing Query Execution

2023-05-08 11:10:10

The \timing command prints the execution time of each query.

 test=# \timing on
 Timing is on.

 test=# select * from foo;  -- with timing on, each statement now reports its execution time
  id | name
 ----+------
   1 | hawq
   2 | hdfs
 (2 rows)

 Time: 16.369 ms

 test=# \timing off  -- turn timing output back off
 Timing is off.
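
The same meta-command also works non-interactively, inside a SQL script run with the psql client. A minimal sketch, assuming OushuDB's standard psql-compatible client; the script name timing_demo.sql is just an example:

 -- timing_demo.sql: \timing applies to every statement that follows it in the script
 \timing on
 select count(*) from foo;
 select * from foo;

Running it with, for example, psql -d test -f timing_demo.sql prints a Time: ... line after each statement's result.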

The EXPLAIN statement displays the query plan.

 test=# explain select count(*) from foo;
                                      QUERY PLAN
 -------------------------------------------------------------------------------------
  Aggregate  (cost=1.07..1.08 rows=1 width=8)
    ->  Gather Motion 1:1  (slice1; segments: 1)  (cost=1.03..1.06 rows=1 width=8)
          ->  Aggregate  (cost=1.03..1.04 rows=1 width=8)
                ->  Append-only Scan on foo  (cost=0.00..1.02 rows=2 width=0)
  Settings:  default_hash_table_bucket_number=6
 (5 rows)
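
Unlike EXPLAIN ANALYZE below, plain EXPLAIN only plans the statement and does not run it, so it is cheap to compare plan shapes for different query forms. The filtered variant below is illustrative and not from the original example:

 -- plan for the full count, as above
 explain select count(*) from foo;

 -- plan for a filtered query on the same table; in PostgreSQL-family databases
 -- the predicate typically shows up as a Filter: line under the scan node
 explain select * from foo where id = 1;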

EXPLAIN ANALYZE shows how the query actually behaved during execution, including when each operator started and when it finished, which helps you locate a query's bottlenecks and then optimize it. For an explanation of query plans and of EXPLAIN ANALYZE output, see the chapter on query plans and query execution. For any given query there can be a huge number of possible plans, and producing a good one is the job of the query optimizer. Because a query's execution time depends heavily on the plan chosen, being familiar with query plans and with how a query actually executes is of great value when tuning queries.

 test=# explain analyze select count(*) from foo;
                                      QUERY PLAN
 -------------------------------------------------------------------------------------
  Aggregate  (cost=1.07..1.08 rows=1 width=8)
    Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(seg-1:changlei/seg-1:changlei) 1/1 rows with 5.944/5.944 ms to end, start offset by 6.568/6.568 ms.
    ->  Gather Motion 1:1  (slice1; segments: 1)  (cost=1.03..1.06 rows=1 width=8)
          Rows out:  Avg 1.0 rows x 1 workers at destination.  Max/Last(seg-1:changlei/seg-1:changlei) 1/1 rows with 5.941/5.941 ms to first row, 5.942/5.942 ms to end, start offset by 6.569/6.569 ms.
          ->  Aggregate  (cost=1.03..1.04 rows=1 width=8)
                Rows out:  Avg 1.0 rows x 1 workers.  Max/Last(seg0:changlei/seg0:changlei) 1/1 rows with 5.035/5.035 ms to first row, 5.036/5.036 ms to end, start offset by 7.396/7.396 ms.
                ->  Append-only Scan on foo  (cost=0.00..1.02 rows=2 width=0)
                      Rows out:  Avg 2.0 rows x 1 workers.  Max/Last(seg0:changlei/seg0:changlei) 2/2 rows with 5.011/5.011 ms to first row, 5.032/5.032 ms to end, start offset by 7.397/7.397 ms.
  Slice statistics:
    (slice0)    Executor memory: 223K bytes.
    (slice1)    Executor memory: 279K bytes (seg0:changlei).
  Statement statistics:
    Memory used: 262144K bytes
  Settings:  default_hash_table_bucket_number=6
  Dispatcher statistics:
    executors used(total/cached/new connection): (1/1/0); dispatcher time(total/connection/dispatch data): (1.462 ms/0.000 ms/0.029 ms).
    dispatch data time(max/min/avg): (0.029 ms/0.029 ms/0.029 ms); consume executor data time(max/min/avg): (0.012 ms/0.012 ms/0.012 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
  Data locality statistics:
    data locality ratio: 1.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (56.000 B/56 B/56 B); segment size with penalty(avg/min/max): (56.000 B/56 B/56 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 0.049 ms; resource allocation: 0.612 ms; datalocality calculation: 0.085 ms.
  Total runtime: 13.398 ms
 (20 rows)
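
Because EXPLAIN ANALYZE actually executes the statement, the timings above reflect a real run, and for complex queries the output can get long. One convenient way to keep it for later comparison is psql's \o meta-command, which redirects query output to a file; the path below is just an example:

 -- send query output to a file
 \o /tmp/foo_count_plan.txt
 explain analyze select count(*) from foo;
 -- switch output back to the terminal
 \o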
