This article is a quick walkthrough, following the VPP documentation, of using basic memif interfaces to get two VPP instances talking to each other.
- memif overview
memif is a very high-performance direct memory interface type that can be used between VPP instances. It uses a UNIX domain socket (a file socket) as the control channel to set up the shared memory.
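The control channel is an ordinary UNIX domain socket on the filesystem. As a quick sanity check (a sketch; the path below is VPP's default memif socket, which also shows up in the libmemif output later in this article), you can confirm it exists once a memif interface has been created:
# VPP's default memif control socket, visible after a memif interface is created
ls -l /run/vpp/memif.sock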
- Connecting two VPP instances over memif
- Write a startup configuration file for each instance. The contents of startup1.conf are shown below; startup2.conf is identical except that cli-listen is changed to /run/vpp/cli-vpp2.sock.
## cat startup1.conf
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli-vpp1.sock
gid vpp
}
api-trace {
on
}
api-segment {
gid vpp
}
socksvr {
default
}
cpu {
}
With these two files we can run vpp1 and vpp2 side by side on the same host.
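Each instance is started against its own configuration file. A minimal sketch, assuming the vpp binary is on PATH and the two .conf files are in the current directory:
# start each instance with its own startup file (paths are illustrative)
sudo vpp -c startup1.conf
sudo vpp -c startup2.conf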
Configure vpp1 as follows:
vppctl -s /run/vpp/cli-vpp1.sock create interface memif id 0 master
vppctl -s /run/vpp/cli-vpp1.sock set interface state memif0/0 up
vppctl -s /run/vpp/cli-vpp1.sock set interface ip addr memif0/0 192.168.1.1/24
Configure vpp2 as follows:
vppctl -s /run/vpp/cli-vpp2.sock create interface memif id 0 slave
vppctl -s /run/vpp/cli-vpp2.sock set interface state memif0/0 up
vppctl -s /run/vpp/cli-vpp2.sock set interface ip addr memif0/0 192.168.1.2/24
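Before pinging, it is worth confirming that the two sides have actually negotiated. A hedged sketch (exact output varies by VPP version):
# verify the memif connection state and the assigned addresses
vppctl -s /run/vpp/cli-vpp1.sock show memif
vppctl -s /run/vpp/cli-vpp1.sock show interface addr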
With that, vpp1 and vpp2 are wired together over memif, and each can ping the other successfully.
DBGvpp# ping 192.168.1.1
116 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=32.0338 ms
116 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=32.0211 ms
116 bytes from 192.168.1.1: icmp_seq=4 ttl=64 time=24.6519 ms
116 bytes from 192.168.1.1: icmp_seq=5 ttl=64 time=17.2863 ms
Statistics: 5 sent, 4 received, 20% packet loss
Below is the receive-side packet trace captured by enabling trace. Note that the input node here is memif-input rather than dpdk-input.
DBGvpp# clear trace
DBGvpp# trace add memif-input 10
DBGvpp# show trace
------------------- Start of thread 0 vpp_main -------------------
Packet 1
04:00:50:799459: memif-input
memif: hw_if_index 3 next-index 4
slot: ring 0
04:00:50:799468: ethernet-input
frame: flags 0x1, hw-if-index 3, sw-if-index 3
IP4: 02:fe:a1:90:12:a0 -> 02:fe:73:bf:65:8b
04:00:50:799473: ip4-input
ICMP: 192.168.1.2 -> 192.168.1.1
tos 0x00, ttl 254, length 96, checksum 0x3949
fragment id 0x0000
ICMP echo_request checksum 0x3551
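Besides per-packet traces, the node runtime counters tell the same story; a quick, illustrative check from the same CLI is:
# memif-input should appear with non-zero vector counts in the runtime stats
DBGvpp# show runtime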
- Testing with libmemif from the VPP extras directory
## build
cd vpp/extras/libmemif
mkdir build
cd build
cmake ..
make install
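If the install step fails with permission errors, run it with root privileges and refresh the dynamic linker cache so the example binaries can find libmemif.so (a sketch, assuming the default /usr/local install prefix):
# install may require root; refresh the linker cache afterwards
sudo make install
sudo ldconfig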
# run
[14:38:16]root:memif$ ./icmp_responder-epoll
LIBMEMIF EXAMPLE APP: ICMP_Responder
==============================
libmemif version: 3.1
memif version: 512
commands:
help - prints this help
exit - exit app
conn <index> <mode> [<interrupt-desc>] - create memif. index is also used as interface id, mode 0 = slave 1 = master, interrupt-desc none = default 0 = if ring is full wait 1 = handle only ARP requests
del <index> - delete memif
show - show connection details
ip-set <index> <ip-addr> - set interface ip address
rx-mode <index> <qid> <polling|interrupt> - set queue rx mode
sh-count - print counters
cl-count - clear counters
send <index> <tx> <ip> <mac> - send icmp
conn 0 0 # bring up connection with index 0 in slave mode (mode 0 = slave)
INFO: memif connected!
show # inspect the current interface configuration
MEMIF DETAILS
==============================
interface index: 0
interface ip: 192.168.1.2
interface name: memif_connection
app name: ICMP_Responder
remote interface name: memif0/0
remote app name: VPP 19.08.1-304~g7984cd97b-dirt
id: 0
secret: (null)
role: slave
mode: ethernet
socket filename: /run/vpp/memif.sock
rx queues:
queue id: 0
ring size: 2048
ring rx mode: interrupt
ring head: 2070
ring tail: 22
buffer size: 2048
tx queues:
queue id: 0
ring size: 2048
ring rx mode: polling
ring head: 22
ring tail: 22
buffer size: 2048
link: up
Now pinging 192.168.1.2 from vpp1 succeeds as well.
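For completeness, the same check from the vpp1 side looks like this (a sketch; on some VPP versions ping is provided by a plugin that must be enabled):
vppctl -s /run/vpp/cli-vpp1.sock ping 192.168.1.2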
Summary:
This article only covers the basics: setting up memif-based communication between two VPP instances, and establishing a connection between a libmemif application and VPP. The underlying implementation details still call for deeper study.
References:
- https://fd.io/docs/vpp/master/gettingstarted/progressivevpp/twovppinstances.html
- https://docs.fd.io/vpp/17.10/libmemif_example_setup_doc.html
- https://doc.dpdk.org/guides/nics/memif.html