TiKV and TiDB: TiKV Configuration Parameters

2022-09-29 11:41:30


Overview:

At the lowest level, TiKV uses RocksDB for persistent storage, so many of TiKV's performance-related parameters map directly to RocksDB parameters. TiKV runs two RocksDB instances: the default RocksDB instance stores KV data, while the Raft RocksDB instance (RaftDB for short) stores Raft data.

TiKV uses RocksDB's Column Families (CF) feature.

  • The default RocksDB instance stores KV data in 3 internal CFs: default, write, and lock.
    • The default CF stores the actual data; its parameters live under [rocksdb.defaultcf].
    • The write CF stores version information (MVCC) and index-related data; its parameters live under [rocksdb.writecf].
    • The lock CF stores lock information and uses the default parameters.
  • The Raft RocksDB instance stores the Raft log.
    • Its default CF mainly stores the Raft log; its parameters live under [raftdb.defaultcf].

Each CF has its own block-cache, which caches data blocks to speed up RocksDB reads. Its size is controlled by block-cache-size: the larger it is, the more hot data can be cached, which benefits reads, but the more system memory it consumes.

Each CF also has its own write-buffer, whose size is controlled by write-buffer-size.
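As a rough illustration of how these cache sizes relate to host memory, here is a minimal Python sketch. The 40%/15% ratios come from the official guidance quoted later in this post for defaultcf and writecf; they are assumptions that vary by TiKV version, not a fixed rule.

```python
# Sketch: estimate per-CF block-cache sizes from total system memory.
# The 0.40 / 0.15 ratios follow the official TiKV guidance quoted later
# in this post; actual defaults depend on the TiKV version.

def block_cache_sizes(total_mem_mb):
    """Return suggested block-cache sizes (MB) for defaultcf and writecf."""
    return {
        "rocksdb.defaultcf": int(total_mem_mb * 0.40),  # data CF
        "rocksdb.writecf":   int(total_mem_mb * 0.15),  # MVCC/index CF
    }

sizes = block_cache_sizes(188 * 1024)  # e.g. the 188GB host discussed below
print(sizes)
```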

TiKV version:
# ./tikv-server --version
TiKV 
Release Version:   2.0.6
Git Commit Hash:   57c83dc4ebc93d38d77dc8f7d66db224760766cc
Git Commit Branch: release-2.0
UTC Build Time:    2018-08-03 11:28:38
Rust Version:      1.27.0-nightly (48fa6f963 2018-04-05)

TiKV config file sections:
#grep '^[[]' last_tikv.toml 
[readpool.storage]
[readpool.coprocessor]
[server]
[server.labels]
[storage]
[pd]
[metric]
[raftstore]
[coprocessor]
[rocksdb]
[rocksdb.defaultcf]
[rocksdb.writecf]
[rocksdb.lockcf]
[rocksdb.raftcf]
[raftdb]
[raftdb.defaultcf]
[security]
[import]
Note: to view the file excluding the section-header lines (those starting with [):
#grep '^[^[]' last_tikv.toml
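The same section listing can be done without grep. A minimal Python sketch (the parsing is deliberately naive, matching only the simple formatting shown in this post):

```python
# Sketch: list the section headers of a TiKV TOML config,
# equivalent to `grep '^[[]' last_tikv.toml`.

def toml_sections(text):
    """Return lines that open a TOML table, e.g. '[rocksdb.defaultcf]'."""
    return [line for line in text.splitlines() if line.startswith("[")]

sample = '[server]\naddr = "0.0.0.0:20160"\n[rocksdb]\n'
print(toml_sections(sample))  # ['[server]', '[rocksdb]']
```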

In TiKV 2.0 there are as many as roughly 300 default parameters:

log-level = "info"
log-file = "/data/deploy/log/tikv.log"
[readpool.storage]
high-concurrency = 4
normal-concurrency = 4
low-concurrency = 4
max-tasks-high = 8000
max-tasks-normal = 8000
max-tasks-low = 8000
stack-size = "10MB"
[readpool.coprocessor]
high-concurrency = 38
normal-concurrency = 38
low-concurrency = 38
max-tasks-high = 76000
max-tasks-normal = 76000
max-tasks-low = 76000
stack-size = "10MB"
[server]
addr = "0.0.0.0:20160"
advertise-addr = "10.19.75.102:20160"
notify-capacity = 40960
messages-per-tick = 4096
grpc-compression-type = "none"
grpc-concurrency = 4
grpc-concurrent-stream = 1024
grpc-raft-conn-num = 10
grpc-stream-initial-window-size = "2MB"
grpc-keepalive-time = "10s"
grpc-keepalive-timeout = "3s"
concurrent-send-snap-limit = 32
concurrent-recv-snap-limit = 32
end-point-max-tasks = 2000
end-point-recursion-limit = 1000
end-point-stream-channel-size = 8
end-point-batch-row-limit = 64
end-point-stream-batch-row-limit = 128
end-point-request-max-handle-duration = "1m"
snap-max-write-bytes-per-sec = "100MB"
snap-max-total-size = "0KB"
[server.labels]
[storage]
data-dir = "/data/deploy/data"
gc-ratio-threshold = 1.1
max-key-size = 4096
scheduler-notify-capacity = 10240
scheduler-messages-per-tick = 1024
scheduler-concurrency = 2048000
scheduler-worker-pool-size = 8
scheduler-pending-write-threshold = "100MB"
[pd]
endpoints = ["10.19.85.149:2379", "10.19.15.103:2379", "10.19.189.221:2379"]
[metric]
interval = "15s"
address = "10.19.85.149:9091"
job = "tikv"
[raftstore]
sync-log = true
raftdb-path = ""
capacity = "0KB"
raft-base-tick-interval = "1s"
raft-heartbeat-ticks = 2
raft-election-timeout-ticks = 10
raft-min-election-timeout-ticks = 0
raft-max-election-timeout-ticks = 0
raft-max-size-per-msg = "1MB"
raft-max-inflight-msgs = 256
raft-entry-max-size = "8MB"
raft-log-gc-tick-interval = "10s"
raft-log-gc-threshold = 50
raft-log-gc-count-limit = 73728
raft-log-gc-size-limit = "72MB"
split-region-check-tick-interval = "10s"
region-split-check-diff = "6MB"
region-compact-check-interval = "5m"
clean-stale-peer-delay = "10m"
region-compact-check-step = 100
region-compact-min-tombstones = 10000
pd-heartbeat-tick-interval = "1m"
pd-store-heartbeat-tick-interval = "10s"
snap-mgr-gc-tick-interval = "1m"
snap-gc-timeout = "4h"
lock-cf-compact-interval = "10m"
lock-cf-compact-bytes-threshold = "256MB"
notify-capacity = 40960
messages-per-tick = 4096
max-peer-down-duration = "5m"
max-leader-missing-duration = "2h"
abnormal-leader-missing-duration = "10m"
peer-stale-state-check-interval = "5m"
snap-apply-batch-size = "10MB"
consistency-check-interval = "0s"
report-region-flow-interval = "1m"
raft-store-max-leader-lease = "9s"
right-derive-when-split = true
allow-remove-leader = false
merge-max-log-gap = 10
merge-check-tick-interval = "10s"
use-delete-range = false
cleanup-import-sst-interval = "10m"
[coprocessor]
split-region-on-table = true
region-max-size = "144MB"
region-split-size = "96MB"
[rocksdb]
wal-recovery-mode = 2
wal-dir = ""
wal-ttl-seconds = 0
wal-size-limit = "0KB"
max-total-wal-size = "4GB"
max-background-jobs = 6
max-manifest-file-size = "20MB"
create-if-missing = true
max-open-files = 40960
enable-statistics = true
stats-dump-period = "10m"
compaction-readahead-size = "0KB"
info-log-max-size = "1GB"
info-log-roll-time = "0s"
info-log-keep-log-file-num = 10
info-log-dir = ""
rate-bytes-per-sec = "0KB"
bytes-per-sync = "1MB"
wal-bytes-per-sync = "512KB"
max-sub-compactions = 1
writable-file-max-buffer-size = "1MB"
use-direct-io-for-flush-and-compaction = false
enable-pipelined-write = true
[rocksdb.defaultcf]
block-size = "64KB"
block-cache-size = "48331MB"
disable-block-cache = false
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
use-bloom-filter = true
whole-key-filtering = true
bloom-filter-bits-per-key = 10
block-based-bloom-filter = false
read-amp-bytes-per-bit = 0
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "8MB"
level0-file-num-compaction-trigger = 4
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-compaction-bytes = "2GB"
compaction-pri = 3
dynamic-level-bytes = false
num-levels = 7
max-bytes-for-level-multiplier = 10
compaction-style = 0
disable-auto-compactions = false
soft-pending-compaction-bytes-limit = "64GB"
hard-pending-compaction-bytes-limit = "256GB"
[rocksdb.writecf]
block-size = "64KB"
block-cache-size = "28998MB"
disable-block-cache = false
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
use-bloom-filter = true
whole-key-filtering = false
bloom-filter-bits-per-key = 10
block-based-bloom-filter = false
read-amp-bytes-per-bit = 0
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "8MB"
level0-file-num-compaction-trigger = 4
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-compaction-bytes = "2GB"
compaction-pri = 3
dynamic-level-bytes = false
num-levels = 7
max-bytes-for-level-multiplier = 10
compaction-style = 0
disable-auto-compactions = false
soft-pending-compaction-bytes-limit = "64GB"
hard-pending-compaction-bytes-limit = "256GB"
[rocksdb.lockcf]
block-size = "16KB"
block-cache-size = "1GB"
disable-block-cache = false
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
use-bloom-filter = true
whole-key-filtering = true
bloom-filter-bits-per-key = 10
block-based-bloom-filter = false
read-amp-bytes-per-bit = 0
compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "128MB"
target-file-size-base = "8MB"
level0-file-num-compaction-trigger = 1
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-compaction-bytes = "2GB"
compaction-pri = 0
dynamic-level-bytes = false
num-levels = 7
max-bytes-for-level-multiplier = 10
compaction-style = 0
disable-auto-compactions = false
soft-pending-compaction-bytes-limit = "64GB"
hard-pending-compaction-bytes-limit = "256GB"
[rocksdb.raftcf]
block-size = "16KB"
block-cache-size = "128MB"
disable-block-cache = false
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
use-bloom-filter = true
whole-key-filtering = true
bloom-filter-bits-per-key = 10
block-based-bloom-filter = false
read-amp-bytes-per-bit = 0
compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "128MB"
target-file-size-base = "8MB"
level0-file-num-compaction-trigger = 1
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-compaction-bytes = "2GB"
compaction-pri = 0
dynamic-level-bytes = false
num-levels = 7
max-bytes-for-level-multiplier = 10
compaction-style = 0
disable-auto-compactions = false
soft-pending-compaction-bytes-limit = "64GB"
hard-pending-compaction-bytes-limit = "256GB"
[raftdb]
wal-recovery-mode = 2
wal-dir = ""
wal-ttl-seconds = 0
wal-size-limit = "0KB"
max-total-wal-size = "4GB"
max-manifest-file-size = "20MB"
create-if-missing = true
max-open-files = 40960
enable-statistics = true
stats-dump-period = "10m"
compaction-readahead-size = "0KB"
info-log-max-size = "1GB"
info-log-roll-time = "0s"
info-log-keep-log-file-num = 10
info-log-dir = ""
max-sub-compactions = 1
writable-file-max-buffer-size = "1MB"
use-direct-io-for-flush-and-compaction = false
enable-pipelined-write = true
allow-concurrent-memtable-write = false
bytes-per-sync = "1MB"
wal-bytes-per-sync = "512KB"
[raftdb.defaultcf]
block-size = "64KB"
block-cache-size = "2GB"
disable-block-cache = false
cache-index-and-filter-blocks = true
pin-l0-filter-and-index-blocks = true
use-bloom-filter = false
whole-key-filtering = true
bloom-filter-bits-per-key = 10
block-based-bloom-filter = false
read-amp-bytes-per-bit = 0
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
max-bytes-for-level-base = "512MB"
target-file-size-base = "8MB"
level0-file-num-compaction-trigger = 4
level0-slowdown-writes-trigger = 20
level0-stop-writes-trigger = 36
max-compaction-bytes = "2GB"
compaction-pri = 0
dynamic-level-bytes = false
num-levels = 7
max-bytes-for-level-multiplier = 10
compaction-style = 0
disable-auto-compactions = false
soft-pending-compaction-bytes-limit = "64GB"
hard-pending-compaction-bytes-limit = "256GB"
[security]
ca-path = ""
cert-path = ""
key-path = ""
[import]
import-dir = "/tmp/tikv/import"
num-threads = 8
stream-channel-window = 128

Parameter comparison notes:

TiKV has many default parameters, but the following groups actually share the same parameter names:

[rocksdb.defaultcf]
[rocksdb.writecf]
[rocksdb.lockcf]
[rocksdb.raftcf]
[raftdb.defaultcf]

Most parameter values in the groups above are identical; the parameters that differ are:

block-size 
block-cache-size
compression-per-level
max-bytes-for-level-base
level0-file-num-compaction-trigger
compaction-pri

The parameters in the [rocksdb] and [raftdb] groups are largely identical. Judging from the dump above, [rocksdb] additionally has max-background-jobs and the parameter below, while [raftdb] additionally has allow-concurrent-memtable-write:

rate-bytes-per-sec = "0KB"
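The comparison above can be reproduced mechanically. A minimal sketch that finds which settings differ between two CF sections, assuming the simple key = value formatting shown in this post:

```python
# Sketch: diff two CF sections taken from the config dump above.
# parse_kv handles only the flat "key = value" lines shown in this post.

def parse_kv(block):
    """Parse 'key = value' lines into a dict of strings."""
    kv = {}
    for line in block.strip().splitlines():
        key, _, value = line.partition("=")
        kv[key.strip()] = value.strip()
    return kv

defaultcf = parse_kv("""
block-size = "64KB"
block-cache-size = "48331MB"
compaction-pri = 3
""")
lockcf = parse_kv("""
block-size = "16KB"
block-cache-size = "1GB"
compaction-pri = 0
""")

diff = {k for k in defaultcf if defaultcf[k] != lockcf.get(k)}
print(sorted(diff))  # ['block-cache-size', 'block-size', 'compaction-pri']
```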

Official TiDB explanations for some TiKV parameters (for reference only; they may not apply to version 2.0):

# Log level: trace, debug, info, warn, error, off
log-level = "info"
[server]
# Listening address
# addr = "127.0.0.1:20160"
# The defaults are recommended
# notify-capacity = 40960
# messages-per-tick = 4096
# Size of the gRPC thread pool
# grpc-concurrency = 4
# Number of gRPC connections between each pair of TiKV instances
# grpc-raft-conn-num = 10
# Most read requests from TiDB are sent to TiKV's coprocessor. This parameter
# sets the number of coprocessor threads. For read-heavy workloads, increase
# it, but keep it below the number of CPU cores. For example, on a 32-core
# machine it can even be set to 30 under heavy reads. If unset, TiKV sets it
# to 0.8 times the total number of CPU cores.
# end-point-concurrency = 8
# Labels for the TiKV instance, used for replica scheduling
# labels = {zone = "cn-east-1", host = "118", disk = "ssd"}
[storage]
# Data directory
# data-dir = "/tmp/tikv/store"
# The default is usually fine. When importing data, setting it to 1024000 is recommended.
# scheduler-concurrency = 102400
# Number of write threads. Increase it when writes are frequent. If
# `top -H -p tikv-pid` shows the sched-worker-pool threads are all very busy,
# raise scheduler-worker-pool-size to add write threads.
# scheduler-worker-pool-size = 4
[pd]
# PD addresses
# endpoints = ["127.0.0.1:2379","127.0.0.2:2379","127.0.0.3:2379"]
[metric]
# Interval for pushing metrics to the Prometheus pushgateway
interval = "15s"
# Prometheus pushgateway address
address = ""
job = "tikv"
[raftstore]
# Defaults to true, forcing data to be flushed to disk. For workloads that do
# not need financial-grade durability, setting it to false gives better performance.
sync-log = true
# Raft RocksDB directory. Defaults to the raft subdirectory of [storage.data-dir].
# With multiple disks, placing the Raft RocksDB data on a separate disk improves
# TiKV performance.
# raftdb-dir = "/tmp/tikv/store/raft"
region-max-size = "384MB"
# Region split threshold
region-split-size = "256MB"
# When a region has written more than this amount of data, TiKV checks whether
# it needs to split. To reduce the cost of scanning data during these checks,
# this can be set to 32MB while importing data; use the default in normal operation.
region-split-check-diff = "32MB"
[rocksdb]
# Maximum number of threads for RocksDB background jobs, which include
# compaction and flush. For why RocksDB needs compaction, see the RocksDB
# documentation. Under heavy write traffic (e.g. data import), enable more
# threads, but fewer than the number of CPU cores; on a 32-core machine,
# 28 is reasonable during imports.
# max-background-jobs = 8
# Maximum number of file handles RocksDB can open.
# max-open-files = 40960
# Size limit of the RocksDB MANIFEST file.
# For details see: https://github.com/facebook/rocksdb/wiki/MANIFEST
max-manifest-file-size = "20MB"
# RocksDB write-ahead log directory. With two disks, placing the RocksDB data
# and the WAL on different disks improves TiKV performance.
# wal-dir = "/tmp/tikv/store"
# The two parameters below control how RocksDB archives the WAL.
# For details see: https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database?
# wal-ttl-seconds = 0
# wal-size-limit = 0
# Maximum total size of RocksDB WAL logs; the default is usually fine.
# max-total-wal-size = "4GB"
# Enables or disables RocksDB statistics.
# enable-statistics = true
# Readahead during RocksDB compaction; on spinning disks, at least 2MB is recommended.
# compaction-readahead-size = "2MB"
[rocksdb.defaultcf]
# Data block size. RocksDB compresses data in units of blocks, and a block is
# also the smallest unit cached in the block-cache (similar to the page
# concept in other databases).
block-size = "64KB"
# Per-level compression in RocksDB. Options: no, snappy, zlib, bzip2, lz4, lz4hc, zstd.
# "no:no:lz4:lz4:lz4:zstd:zstd" means level0 and level1 are uncompressed,
# level2 through level4 use lz4, and level5 and level6 use zstd.
# "no" means no compression; lz4 balances speed and compression ratio; zlib
# compresses well and is friendly to storage space, but is slow and uses
# considerable CPU. Configure compression per machine according to its CPU and
# I/O resources. For example, with "no:no:lz4:lz4:lz4:zstd:zstd", if heavy
# writes (data import) create high I/O pressure (iostat shows %util pinned at
# 100%, or top shows a lot of iowait) while CPU is still plentiful, consider
# enabling compression on level0 and level1 to trade CPU for I/O. Conversely,
# if I/O pressure is low but CPU is exhausted, and `top -H` shows many threads
# prefixed with bg (RocksDB compaction threads), consider trading I/O for CPU
# by changing the scheme to "no:no:no:lz4:lz4:zstd:zstd". The goal is to make
# the most of the machine's available resources so TiKV performs as well as it
# can on the hardware at hand.
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
# Size of a RocksDB memtable.
write-buffer-size = "128MB"
# Maximum number of memtables. Writes to RocksDB are first recorded in the WAL
# and then inserted into the memtable. When the memtable reaches
# write-buffer-size, it becomes read-only and a new memtable is created for
# new writes. Read-only memtables are flushed to disk by the RocksDB flush
# threads (max-background-flushes controls the maximum number of flush
# threads), each becoming a level0 SST file. When the flush threads cannot
# keep up and the number of memtables waiting to be flushed reaches
# max-write-buffer-number, RocksDB stalls new writes; stalling is one of
# RocksDB's flow-control mechanisms. When importing data,
# max-write-buffer-number can be raised, e.g. to 10.
max-write-buffer-number = 5
# When the number of level0 SST files reaches level0-slowdown-writes-trigger,
# RocksDB slows down writes, because too many level0 SSTs increase read
# amplification. level0-slowdown-writes-trigger and level0-stop-writes-trigger
# are another facet of RocksDB's flow control. When the level0 SST count
# reaches 4 (the default), level0 SSTs are compacted with the overlapping
# level1 SSTs, easing the read amplification.
level0-slowdown-writes-trigger = 20
# When the number of level0 SST files reaches level0-stop-writes-trigger,
# RocksDB stalls new writes.
level0-stop-writes-trigger = 36
# When level1's data size reaches max-bytes-for-level-base, level1 SSTs are
# compacted with the overlapping level2 SSTs.
# Golden rule: the first guideline for max-bytes-for-level-base is to keep it
# roughly equal to the data volume of level0, which reduces unnecessary
# compaction. For example, with compression "no:no:lz4:lz4:lz4:lz4:lz4",
# max-bytes-for-level-base should be write-buffer-size times 4, since level0
# and level1 are uncompressed and level0 triggers compaction at 4 SSTs (the
# default). If level0 and level1 are compressed, check the RocksDB logs for
# the size of an SST flushed from one memtable, say 32MB; the suggested
# max-bytes-for-level-base is then 32MB * 4 = 128MB.
max-bytes-for-level-base = "512MB"
# SST file size. Level0 SST size depends on write-buffer-size and level0's
# compression; target-file-size-base controls the size of individual SST files
# at level1 through level6.
target-file-size-base = "32MB"
# If unset, TiKV sets this to 40% of total system memory. When deploying
# multiple TiKV nodes on one physical machine, configure it explicitly,
# otherwise TiKV is prone to OOM.
# block-cache-size = "1GB"
[rocksdb.writecf]
# Keep consistent with rocksdb.defaultcf.compression-per-level.
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
# Keep consistent with rocksdb.defaultcf.write-buffer-size.
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
# Keep consistent with rocksdb.defaultcf.max-bytes-for-level-base.
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
# If unset, TiKV sets this to 15% of total system memory. When deploying
# multiple TiKV nodes on one physical machine, configure it explicitly. MVCC
# version data and index-related data are stored in the write CF, so if the
# workload has many indexes per table, this can be set larger.
# block-cache-size = "256MB"
[raftdb]
# Maximum number of file handles RaftDB can open.
# max-open-files = 40960
# Enables or disables RaftDB statistics.
# enable-statistics = true
# Readahead during RaftDB compaction; on spinning disks, at least 2MB is recommended.
# compaction-readahead-size = "2MB"
[raftdb.defaultcf]
# Keep consistent with rocksdb.defaultcf.compression-per-level.
compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
# Keep consistent with rocksdb.defaultcf.write-buffer-size.
write-buffer-size = "128MB"
max-write-buffer-number = 5
min-write-buffer-number-to-merge = 1
# Keep consistent with rocksdb.defaultcf.max-bytes-for-level-base.
max-bytes-for-level-base = "512MB"
target-file-size-base = "32MB"
# Usually between 256MB and 2GB; the default is generally fine, but it can be
# raised somewhat if the system has ample resources.
block-cache-size = "256MB"
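The sizing rule for max-bytes-for-level-base described in the comments above can be written as a one-line calculation. A minimal sketch (the sample inputs are this post's numbers, not measured values):

```python
# Sketch of the "golden rule": size max-bytes-for-level-base to roughly
# match level0's data volume, i.e. the size of one flushed SST times the
# level0 compaction trigger. With L0/L1 uncompressed, a flushed SST is
# about write-buffer-size; with compression, use the SST size observed
# in the RocksDB logs.

def max_bytes_for_level_base(flushed_sst_mb, l0_trigger=4):
    """Suggested max-bytes-for-level-base in MB."""
    return flushed_sst_mb * l0_trigger

# Uncompressed L0: a 128MB memtable flushes to ~128MB.
print(max_bytes_for_level_base(128))  # 512 (matches the 512MB default)
# Compressed L0: a 128MB memtable flushing to ~32MB SSTs.
print(max_bytes_for_level_base(32))   # 128
```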

Official TiDB system-tuning advice:

TiKV memory usage
Besides the block-cache and write-buffer listed above, which occupy system memory:
Reserve some memory for the system page cache.
When handling large queries (e.g. select * from ...), TiKV reads the data and builds the corresponding data structures in memory to return to TiDB, which consumes additional memory.
Recommended TiKV machine configuration
In production, do not deploy TiKV on machines with fewer than 8 CPU cores or less than 32GB of memory.
For high write throughput, use disks with good throughput.
For very low read/write latency, use SSDs with high IOPS.
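A rough way to check the first point is to add up the caches and buffers discussed above. A minimal sketch using this post's dump values (this ignores page cache and per-query memory, so it is a floor, not a full accounting):

```python
# Sketch: rough fixed memory reservation of a TiKV node from its
# block-caches plus write-buffers, using the config dump in this post.
# Per-query memory and OS page cache come on top of this.

def tikv_fixed_memory_mb(block_caches_mb, write_buffer_mb=128,
                         max_write_buffers=5, num_cfs=5):
    """Very rough upper bound in MB; defaults mirror this post's config."""
    return sum(block_caches_mb) + write_buffer_mb * max_write_buffers * num_cfs

# block-cache-size values from the dump above:
# defaultcf, writecf, lockcf (1GB), raftcf, raftdb.defaultcf (2GB)
caches = [48331, 28998, 1024, 128, 2048]
print(tikv_fixed_memory_mb(caches))  # 83729 (~82GB)
```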

Parameters to focus on in actual use:

Recommended TiKV parameter configuration:
sync-log = false
grpc-concurrency = 8
grpc-raft-conn-num = 24 
[defaultcf]
block-cache-size = "12GB"
[writecf]
block-cache-size = "5GB"
[raftdb.defaultcf]
block-cache-size = "2GB"
Actual configuration:
[server]
grpc-concurrency = 4
grpc-raft-conn-num = 10
[raftstore]
sync-log = true
[rocksdb.defaultcf]
block-cache-size = "48331MB"
[rocksdb.writecf]
block-cache-size = "28998MB"
[raftdb.defaultcf]
block-cache-size = "2GB"

Configuration changes for a host with a 48-thread CPU and 188GB of memory:

[server]
grpc-concurrency = 4   -->8
grpc-raft-conn-num = 10 -->24
[storage]
scheduler-concurrency = 2048000 
scheduler-worker-pool-size = 8  -->16
[coprocessor]
region-max-size = "144MB"  -->384MB
region-split-size = "96MB"  -->256MB
[raftstore]
region-split-check-diff = "6MB" -->32MB
sync-log = true  
[rocksdb]
max-background-jobs = 6   -->32
max-open-files = 40960    -->65535
[rocksdb.defaultcf]
block-cache-size = "48331MB"  -->80G
[rocksdb.writecf]
block-cache-size = "28998MB"   -->30G
[raftdb.defaultcf]
block-cache-size = "2GB"
In addition, target-file-size-base needs to be raised to 32MB:
target-file-size-base = "8MB" -->32MB
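Before applying cache changes like the above, it is worth sanity-checking that the combined block-caches still leave headroom on the host. A minimal sketch (the 75% threshold is an assumption for illustration, not an official limit):

```python
# Sketch: check that tuned block-cache sizes leave memory headroom.
# The max_fraction threshold is an assumed rule of thumb, not official.

def caches_fit(cache_gb, total_gb, max_fraction=0.75):
    """True if the combined block-caches stay under max_fraction of RAM."""
    return sum(cache_gb) <= total_gb * max_fraction

tuned = [80, 30, 2]   # defaultcf, writecf, raftdb.defaultcf after tuning
print(caches_fit(tuned, 188))  # True: 112GB <= 141GB
```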

References:

https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide

https://github.com/facebook/mysql-5.6/wiki/my.cnf-tuning


Publisher: 全栈程序员栈长. Please credit the source when reposting: https://javaforall.cn/193460.html
