Linux | Network Performance Parameter Testing


Command / Description

echo "1" > /proc/sys/net/ipv4/tcp_window_scaling
    Activate window scaling according to RFC 1323.

echo "1" > /proc/sys/net/ipv4/tcp_timestamps
    Activate timestamps according to RFC 1323.

echo [wmax] > /proc/sys/net/core/rmem_max
    Set the maximum size of the TCP receive window.

echo [wmax] > /proc/sys/net/core/wmem_max
    Set the maximum size of the TCP transmit window.

echo [wmax] > /proc/sys/net/core/rmem_default
    Set the default size of the TCP receive window.

echo [wmax] > /proc/sys/net/core/wmem_default
    Set the default size of the TCP transmit window.

echo "[wmin] [wstd] [wmax]" > /proc/sys/net/ipv4/tcp_rmem
    Set the minimum, default, and maximum receive window. Used by TCP autotuning.

echo "[wmin] [wstd] [wmax]" > /proc/sys/net/ipv4/tcp_wmem
    Set the minimum, default, and maximum transmit window. Used by TCP autotuning.

echo "[bmin] [bdef] [bmax]" > /proc/sys/net/ipv4/tcp_mem
    Set the maximum total TCP buffer space allocatable. Used by TCP autotuning.

ifconfig eth? txqueuelen 1000
    Set the length of the transmit queue. Replace "?" with the actual interface number.
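ifconfig belongs to the deprecated net-tools package; on current systems the same queue length can be set with iproute2 (eth0 below is just a placeholder interface name):

ip link set dev eth0 txqueuelen 1000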

/proc/sys/net/ipv4/tcp_wmem

Defines the socket memory used by send-buffer autotuning. The first value is the minimum number of bytes allocated for each socket's send buffer; the second is the default size, up to which the buffer can grow when the system is not under heavy load (for TCP sockets this value takes precedence over net.core.wmem_default); the third is the maximum number of bytes allowed for the send buffer (buffers requested explicitly via setsockopt() remain capped by net.core.wmem_max).

/proc/sys/net/ipv4/tcp_rmem

Defines the socket memory used by receive-buffer autotuning. The first value is the minimum number of bytes allocated for each socket's receive buffer; the second is the default size, up to which the buffer can grow when the system is not under heavy load (for TCP sockets this value takes precedence over net.core.rmem_default); the third is the maximum number of bytes allowed for the receive buffer (buffers requested explicitly via setsockopt() remain capped by net.core.rmem_max).
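To inspect the current autotuning triplets before changing anything (read-only commands on a stock kernel):

sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# or read /proc directly:
cat /proc/sys/net/ipv4/tcp_rmem /proc/sys/net/ipv4/tcp_wmem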

Creating large files

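# count=0 writes no data; seek extends the file to the given offset,
# so each command creates a sparse file of the requested apparent size instantly.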
#kilobytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200K

#megabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200M

#gigabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200G

#terabytes
dd if=/dev/zero of=filename bs=1 count=0 seek=200T
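Because no data blocks are written, the result is a sparse file; comparing the apparent size with the allocated space confirms it (200M example, filename as above):

ls -lh filename    # apparent size: 200M
du -h filename     # allocated space: 0 for a freshly created sparse file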

File download without rate limiting

heidsoft@heidsoft-dev:~$ wget --no-proxy http://172.16.59.20/bigfile
--2023-02-25 21:48:04--  http://172.16.59.20/bigfile
Connecting to 172.16.59.20:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10737418240 (10G) [application/octet-stream]
Saving to: ‘bigfile’

bigfile                     2%[                                    ] 239.71M  54.6MB/s    eta 3m 4s  ^C

Adding a bandwidth limit

The tc(8) synopsis for the qdisc, class, and filter subcommands:
tc [ OPTIONS ] qdisc [ add | change | replace | link | delete ] dev DEV
   [ parent qdisc-id | root ] [ handle qdisc-id ]
   [ ingress_block BLOCK_INDEX ] [ egress_block BLOCK_INDEX ]
   qdisc [ qdisc specific parameters ]

tc [ OPTIONS ] class [ add | change | replace | delete ] dev DEV
   parent qdisc-id [ classid class-id ] qdisc [ qdisc specific parameters ]

tc [ OPTIONS ] filter [ add | change | replace | delete | get ] dev DEV
   [ parent qdisc-id | root ] [ handle filter-id ] protocol protocol
   prio priority filtertype [ filtertype specific parameters ] flowid flow-id

tc [ OPTIONS ] filter [ add | change | replace | delete | get ] block BLOCK_INDEX
   [ handle filter-id ] protocol protocol prio priority filtertype
   [ filtertype specific parameters ] flowid flow-id
An example HTB setup (htb.sh) that delays traffic leaving port 80 by 200 ms while everything else goes through fq_codel:
cat htb.sh
#!/bin/bash
# Clear any existing root qdisc (harmless error if none is installed)
tc qdisc del dev ens33 root 2>/dev/null
# HTB root qdisc; unclassified traffic defaults to class 1:100
tc qdisc add dev ens33 root handle 1: htb default 100
tc class add dev ens33 parent 1: classid 1:1 htb rate 20000mbit burst 20k
# 1:10 carries the delayed traffic, 1:100 everything else
tc class add dev ens33 parent 1:1 classid 1:10 htb rate 1000mbit burst 20k
tc class add dev ens33 parent 1:1 classid 1:100 htb rate 20000mbit burst 20k
# add 200 ms of latency to class 1:10; normal traffic gets fq_codel
tc qdisc add dev ens33 parent 1:10 handle 10: netem delay 200ms
tc qdisc add dev ens33 parent 1:100 handle 100: fq_codel
# steer packets with source port 80 into the delayed class
tc filter add dev ens33 protocol ip parent 1:0 prio 1 u32 match ip sport 80 0xffff flowid 1:10
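To verify the hierarchy and see which classes are actually handling traffic (read-only commands, same interface as above):

tc -s qdisc show dev ens33
tc -s class show dev ens33
tc filter show dev ens33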
With the filter in place, httping shows the effect: both the SYN-ACK of the TCP handshake and the HTTP response leave the server with source port 80, so each probe traverses the netem class twice and picks up roughly 2 × 200 ms of extra delay, consistent with the ~435 ms timings below.
heidsoft@heidsoft-dev:~$ httping 172.16.59.20
PING 172.16.59.20:80 (/):
connected to 172.16.59.20:80 (239 bytes), seq=0 time=439.90 ms 
connected to 172.16.59.20:80 (239 bytes), seq=1 time=433.06 ms 
connected to 172.16.59.20:80 (239 bytes), seq=2 time=435.11 ms 
connected to 172.16.59.20:80 (239 bytes), seq=3 time=434.99 ms 
connected to 172.16.59.20:80 (239 bytes), seq=4 time=436.19 ms 
connected to 172.16.59.20:80 (239 bytes), seq=5 time=432.37 ms
A fuller wrapper script that applies, removes, and reports a global rate limit:
#!/bin/bash  
# Full path to tc binary 

TC=$(which tc)

#
# NETWORK CONFIGURATION
# interface - name of your interface device
# interface_speed - speed in mbit of your $interface
# ip - IP address of your server, change this if you don't want to use
#      the default catch all filters.
#
interface=eth0
interface_speed=100mbit
ip=4.1.2.3 # The IP address bound to the interface

# Define the upload and download speed limits; the following units
# can be used:
# kbps: Kilobytes per second
# mbps: Megabytes per second
# kbit: kilobits per second
# mbit: megabits per second
# bps: Bytes per second
download_limit=512kbit
upload_limit=10mbit    


# Filter options for limiting the intended interface.
FILTER="$TC filter add dev $interface protocol ip parent 1: prio 1 u32"

#
# Starts the TC rules and limits the upload and download speed
# as configured above.
#

function start_tc {
    # Remove any existing (non-default) root qdisc before installing ours
    $TC qdisc show dev $interface | grep -q "qdisc pfifo_fast 0" ||
        $TC qdisc del dev $interface root
    sleep 1

    # start the tc configuration
    $TC qdisc add dev $interface root handle 1: htb default 30
    $TC class add dev $interface parent 1: classid 1:1 htb rate $interface_speed burst 15k

    $TC class add dev $interface parent 1:1 classid 1:10 htb rate $download_limit burst 15k
    $TC class add dev $interface parent 1:1 classid 1:20 htb rate $upload_limit burst 15k

    $TC qdisc add dev $interface parent 1:10 handle 10: sfq perturb 10
    $TC qdisc add dev $interface parent 1:20 handle 20: sfq perturb 10

    # Apply the filter rules
    
    # Catch-all IP rules, which will set global limit on the server
    # for all IP addresses on the server. 
    $FILTER match ip dst 0.0.0.0/0 flowid 1:10
    $FILTER match ip src 0.0.0.0/0 flowid 1:20

    # If you want to limit the upload/download limit based on specific IP address
    # you can comment the above catch-all filter and uncomment these:
    #
    # $FILTER match ip dst $ip/32 flowid 1:10
    # $FILTER match ip src $ip/32 flowid 1:20
}

#
# Removes the network speed limiting and restores the default TC configuration
#
function stop_tc {
    # Only delete the root qdisc if a non-default one is installed
    $TC qdisc show dev $interface | grep -q "qdisc pfifo_fast 0" ||
        $TC qdisc del dev $interface root
}

function show_status {
        $TC -s qdisc ls dev $interface
}
#
# Display help 
#
function display_help {
        echo "Usage: $0 [OPTION]"
        echo -e "\tstart - Apply the tc limit"
        echo -e "\tstop - Remove the tc limit"
        echo -e "\tstatus - Show status"
}

# Start
if [ -z "$1" ]; then
        display_help
elif [ "$1" == "start" ]; then
        start_tc
elif [ "$1" == "stop" ]; then
        stop_tc
elif [ "$1" == "status" ]; then
        show_status
fi
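Assuming the script is saved as tc-limit.sh (a hypothetical name) on the machine to be limited, it would be driven like this:

chmod +x tc-limit.sh
sudo ./tc-limit.sh start     # apply the limits
sudo ./tc-limit.sh status    # show per-qdisc statistics
sudo ./tc-limit.sh stop      # restore the default configuration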

Tuning tcp_wmem

Set the minimum, default, and maximum send buffer all to 50 MB (52428800 bytes):
[root@MANAGER ~]# echo 52428800 52428800 52428800 >/proc/sys/net/ipv4/tcp_wmem
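Writing to /proc takes effect immediately but does not survive a reboot. A sketch of the equivalent sysctl form, persisted via /etc/sysctl.conf (same values as above):

sysctl -w net.ipv4.tcp_wmem="52428800 52428800 52428800"
# persist across reboots:
echo 'net.ipv4.tcp_wmem = 52428800 52428800 52428800' >> /etc/sysctl.conf
sysctl -p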

File download status after tuning

heidsoft@heidsoft-dev:~$ wget --no-proxy http://172.16.59.20/bigfile
--2023-02-25 21:54:40--  http://172.16.59.20/bigfile
Connecting to 172.16.59.20:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10737418240 (10G) [application/octet-stream]
Saving to: ‘bigfile.1’

bigfile.1                   3%[>                                   ] 334.96M  18.6MB/s    eta 16m 58s^
For reference, a step-by-step TCP/UDP tuning howto (see the links at the end of this post):
How To: Network / TCP / UDP Tuning
This is a very basic step-by-step description of how to improve networking (TCP & UDP) performance on Linux 2.4 for high-bandwidth applications. These settings are especially important for GigE links. Jump to Quick Step or All The Steps.
Assumptions
This howto assumes that the machine being tuned supports high-bandwidth applications. Making these modifications on a machine that supports multiple users and/or multiple connections is not recommended; it may cause the machine to deny connections because of a lack of allocatable memory.
The Steps
1. Make sure that you have root privileges.
2. Type: sysctl -p | grep mem
This will display your current buffer settings. Save these! You may want to roll back to them later.
3. Type: sysctl -w net.core.rmem_max=8388608
This sets the max OS receive buffer size for all types of connections.
4. Type: sysctl -w net.core.wmem_max=8388608
This sets the max OS send buffer size for all types of connections.
5. Type: sysctl -w net.core.rmem_default=65536
This sets the default OS receive buffer size for all types of connections.
6. Type: sysctl -w net.core.wmem_default=65536
This sets the default OS send buffer size for all types of connections.
7. Type: sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
TCP autotuning setting. "The tcp_mem variable defines how the TCP stack should behave when it comes to memory usage. … The first value specified in the tcp_mem variable tells the kernel the low threshold. Below this point, the TCP stack does not bother at all about putting any pressure on the memory usage by different TCP sockets. … The second value tells the kernel at which point to start pressuring memory usage down. … The final value tells the kernel how many memory pages it may use maximally. If this value is reached, TCP streams and packets start getting dropped until we reach a lower memory usage again. This value includes all TCP sockets currently in use."
8. Type: sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
TCP autotuning setting. "The first value tells the kernel the minimum receive buffer for each TCP connection, and this buffer is always allocated to a TCP socket, even under high pressure on the system. … The second value specified tells the kernel the default receive buffer allocated for each TCP socket. This value overrides the /proc/sys/net/core/rmem_default value used by other protocols. … The third and last value specified in this variable specifies the maximum receive buffer that can be allocated for a TCP socket."
9. Type: sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
TCP autotuning setting. "This variable takes 3 different values which hold information on how much TCP send-buffer memory space each TCP socket may use. Every TCP socket has this much buffer space to use before the buffer is filled up. Each of the three values is used under different conditions. … The first value in this variable tells the minimum TCP send buffer space available for a single TCP socket. … The second value in the variable tells us the default buffer space allowed for a single TCP socket to use. … The third value tells the kernel the maximum TCP send buffer space."
10. Type: sysctl -w net.ipv4.route.flush=1
This will ensure that immediately subsequent connections use these values.
Quick Step
Cut and paste the following into a Linux shell with root privileges:
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.core.rmem_default=65536
sysctl -w net.core.wmem_default=65536
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
sysctl -w net.ipv4.route.flush=1
Another set of parameters suggested by VMware Communities:
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_rmem = 4096 262144 16777216
net.ipv4.tcp_wmem = 4096 262144 16777216
net.core.optmem_max = 524288
net.core.netdev_max_backlog = 200000
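These lines are already in sysctl.conf format. A minimal sketch of applying them, assuming they were saved to a hypothetical file /etc/sysctl.d/99-net-tuning.conf:

# load every sysctl configuration file, including the new one
sudo sysctl --system
# spot-check a couple of the resulting values
sysctl net.core.rmem_max net.ipv4.tcp_rmem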
  • https://www.cnblogs.com/fczjuever/archive/2013/04/17/3026694.html
  • https://github.com/leandromoreira/linux-network-performance-parameters/blob/master/README.md
  • https://github.com/cilium/pwru
  • https://www.brendangregg.com/perf.html
  • http://proj.sunet.se/E2E/tcptune.html
  • https://github.com/penberg/linux-networking
  • https://cloud.tencent.com/developer/article/1409664
  • https://gist.github.com/Lakshanz/19613830e5c6f233754e12b25408cc51
