In the previous article "learning: vrrp plugins (2)", I briefly covered the configuration of VRRP unicast mode. Coincidentally, someone on the vpp-dev mailing list asked how to configure VRRP unicast mode, and the VRRP plugin maintainer Matthew Smith replied. Even so, I did not fully understand the author's design intent, how unicast mode is supposed to work, or how it would be applied in cloud computing scenarios. The configuration he suggested and his reply (translated) follow:
device A:
set int ip address GigabitEthernet0/14/0 10.10.10.10/24
set int state GigabitEthernet0/14/0 up
vrrp vr add GigabitEthernet0/14/0 vr_id 1 priority 250 10.10.10.15 unicast
vrrp peers GigabitEthernet0/14/0 vr_id 1 10.10.10.5
vrrp proto start GigabitEthernet0/14/0 vr_id 1
device B:
set int ip address GigabitEthernet0/14/0 10.10.10.5/24
set int state GigabitEthernet0/14/0 up
vrrp vr add GigabitEthernet0/14/0 vr_id 1 priority 200 10.10.10.15 unicast
vrrp peers GigabitEthernet0/14/0 vr_id 1 10.10.10.10
vrrp proto start GigabitEthernet0/14/0 vr_id 1
This should result in your two instances electing device A as the master. It should send advertisements while device B is running. If ARP requests for 10.10.10.15 are received, it should respond to those as well, but it will reply with the VRRP virtual MAC address, which may not be the correct behavior for a unicast scenario. I originally added the ability to send unicast advertisements because I thought it might be useful in cloud environments (AWS, Azure) that do not support multicast. However, replying to ARP requests with the VRRP virtual MAC address may not work in cloud environments. Or it may not matter, because the ARP requests are probably handled by the cloud infrastructure and never actually delivered to the VM where VPP runs; I am not sure.

Your original commands enabled accept mode on each VR and also added the VR virtual IP address (10.10.10.10/24) on the interface where the VR is configured. In general, when accept mode is used, the VR virtual IP address should not be configured on the interface. The virtual IP address should only be configured on the interface of the VR whose priority is 255 (the "owner" of the virtual IP address). For a VR with priority less than 255, the address is added automatically when the VR transitions to the master state and removed automatically when it transitions from master back to backup. If unicast advertisements are used, I do not remember whether enabling accept mode has any effect. As I mentioned, that feature was designed for cloud environments, where simply adding an IP address on the interface is not enough and some external action is required (using the AWS/Azure APIs to remove the address from one host/interface and add it to another).
The above is based on a machine translation (Youdao); I recommend reading the original thread: https://lists.fd.io/g/vpp-dev/topic/how_to_config_vrrp_unicast/87993023
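As the reply notes, for a VR with priority below 255 the virtual address is installed when the VR becomes master and removed when it falls back to backup. A minimal sketch of what such add/remove logic could look like, assuming VPP's ip4_add_del_interface_address() API (the helper name, the /32 prefix length and the surrounding wiring are illustrative, not the plugin's actual code):

#include <vnet/ip/ip.h>

/* Hypothetical helper: install/remove a VR's virtual IPv4 addresses on the
 * interface when the VR enters/leaves the master state. */
static void
vr_virtual_addrs_add_del (vlib_main_t *vm, u32 sw_if_index,
                          ip4_address_t *vips, u32 n_vips, int is_del)
{
  u32 i;

  for (i = 0; i < n_vips; i++)
    /* /32 host address: the interface already carries the real subnet. */
    ip4_add_del_interface_address (vm, sw_if_index, &vips[i],
                                   32 /* address_length */, is_del);
}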
In unicast mode the recommendation is to drop accept-mode; accept-mode is mainly intended for multicast mode, i.e. for receiving VRRP multicast packets. Only while a VR is in the master state does the plugin enable the vrrp4-accept-owner-input feature on the interface's ip4-multicast feature arc (a sketch of how such a toggle might look follows the feature listing below). At first glance it is not obvious what value this node adds:
DBGvpp# show interface feat GigabitEthernet2/4/0
ip4-multicast:
vrrp4-accept-owner-input
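A feature on the ip4-multicast arc like the one shown above is normally toggled with vnet_feature_enable_disable(). A minimal sketch of how the plugin could switch it on when a VR becomes master and off again afterwards (the helper name and the IPv6 node name are assumptions; only the IPv4 arc and node names come from the output above):

#include <vnet/feature/feature.h>

/* Hypothetical helper: toggle the accept-owner input feature on the VR's
 * interface as the VR enters or leaves the master state. */
static void
vrrp_accept_owner_feature_enable_disable (u32 sw_if_index, int is_ipv6,
                                          int enable)
{
  const char *arc = is_ipv6 ? "ip6-multicast" : "ip4-multicast";
  const char *node = is_ipv6 ? "vrrp6-accept-owner-input"
                             : "vrrp4-accept-owner-input";

  vnet_feature_enable_disable (arc, node, sw_if_index, enable,
                               0 /* feature_config */,
                               0 /* n_feature_config_bytes */);
}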
Because the multicast FIB entry for 224.0.0.18 exists on both the VRRP master and the backup device, as shown below:
DBGvpp# show ip mfib 224.0.0.18
ipv4-VRF:0, fib_index:0 flags:none
(*, 224.0.0.18/32):
fib:0 index:8 locks:1
src:API flags:none locks:1:
path-list:[16] locks:2 flags:no-uRPF, uRPF-list: None
path:[18] pl-index:16 ip4 weight=1 pref=0 receive: oper-flags:resolved, cfg-flags:local,
[@0]: dpo-receive
path:[20] pl-index:16 ip4 weight=1 pref=0 attached: oper-flags:resolved,
GigabitEthernet2/4/0
Extensions:
path:18 flags:Forward,
path:20 flags:Accept,
Interface-Forwarding:
GigabitEthernet2/4/0: Accept,
Interfaces:
GigabitEthernet2/4/0: Accept,
multicast-ip4-chain
[@1]: dpo-replicate: [index:6 buckets:1 flags:[has-local ] to:[1718:79028]]
[0] [@1]: dpo-receive
However, it is not clear why vrrp4-accept-owner-input was added. From the code flow, the node checks whether this device has a VR configured with the matching VRID; if that VR is in the master state and accept mode is enabled, the multicast route lookup is skipped and the packet is handed directly to the vrrp4-input node, otherwise the multicast routing table still has to be consulted (a simplified sketch of the dispatch around this helper follows the snippet). I do not see any particular benefit of this node; it even looks as though a packet whose destination IP is some other multicast address could be short-circuited to vrrp4-input without any multicast route lookup:
static_always_inline void
vrrp_accept_owner_next_node (u32 sw_if_index, u8 vr_id, u8 is_ipv6,
                             u32 *next_index, u32 *error)
{
  vrrp_vr_t *vr = vrrp_vr_lookup (sw_if_index, vr_id, is_ipv6);

  /* Only when a VR with this VRID exists on the interface, is currently in
     the master state and has accept mode enabled do we bypass the multicast
     FIB lookup and hand the packet straight to vrrp4-input. */
  if (vr && (vr->runtime.state == VRRP_VR_STATE_MASTER) &&
      (vr->config.flags & VRRP_VR_ACCEPT))
    {
      *next_index = VRRP_ACCEPT_OWNER_NEXT_PROCESS;
      *error = VRRP_ACCEPT_OWNER_ERROR_PROCESSED;
    }
}
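For context, the per-packet dispatch around this helper can be pictured roughly as below. This is only a sketch: the dispatch function and the VRRP_ACCEPT_OWNER_NEXT_MCAST_LOOKUP name are assumptions based on the next-node list in the graph dump that follows; the default next node is the multicast FIB lookup and the helper may override it.

#include <vlib/vlib.h>
#include <vnet/buffer.h>
#include <vnet/ip/ip4_packet.h>

/* Hypothetical per-packet dispatch around the helper above. */
static_always_inline void
vrrp4_accept_owner_dispatch_one (vlib_buffer_t *b, u32 *next_index, u32 *error)
{
  u32 sw_if_index = vnet_buffer (b)->sw_if_index[VLIB_RX];
  ip4_header_t *ip = vlib_buffer_get_current (b);
  /* The VRID is the second octet of the VRRP header, right after the IPv4 header. */
  u8 vr_id = ((u8 *) ip4_next_header (ip))[1];

  /* Default: continue to the multicast FIB lookup ([1] in the graph dump below). */
  *next_index = VRRP_ACCEPT_OWNER_NEXT_MCAST_LOOKUP;
  *error = 0;

  /* May redirect to vrrp4-input when a master VR with accept mode owns this VRID. */
  vrrp_accept_owner_next_node (sw_if_index, vr_id, 0 /* is_ipv6 */,
                               next_index, error);
}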
The next nodes of the vrrp4-accept-owner-input node are as follows:
DBGvpp# show vlib graph vrrp4-accept-owner-input
           Name                          Next                         Previous
vrrp4-accept-owner-input        vrrp4-input [0]              ip4-mpls-label-disposition
                                ip4-mfib-forward-lookup [1]  ip4-mpls-label-disposition
                                                             ip4-input-no-checksum
                                                             ip4-input
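These two next indices would normally be declared when the node is registered with the graph. A minimal sketch of such a registration (the enum member names are assumptions; the node name and next-node names come from the graph dump above):

#include <vlib/vlib.h>

/* Hypothetical next-node enum matching the graph dump above. */
typedef enum
{
  VRRP_ACCEPT_OWNER_NEXT_PROCESS,       /* [0] vrrp4-input */
  VRRP_ACCEPT_OWNER_NEXT_MCAST_LOOKUP,  /* [1] ip4-mfib-forward-lookup */
  VRRP_ACCEPT_OWNER_N_NEXT,
} vrrp_accept_owner_next_t;

VLIB_REGISTER_NODE (vrrp4_accept_owner_input_node) = {
  .name = "vrrp4-accept-owner-input",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_next_nodes = VRRP_ACCEPT_OWNER_N_NEXT,
  .next_nodes = {
    [VRRP_ACCEPT_OWNER_NEXT_PROCESS] = "vrrp4-input",
    [VRRP_ACCEPT_OWNER_NEXT_MCAST_LOOKUP] = "ip4-mfib-forward-lookup",
  },
};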
In multicast mode, the packet trace captured on the Backup device looks like this:
Packet 1
00:03:40:726434: dpdk-input
GigabitEthernet2/4/0 rx queue 0
buffer 0x9bd41: current data 0, length 60, buffer-pool 0, ref-count 1, totlen-nifb 0, trace handle 0x0
ext-hdr-valid
l4-cksum-computed l4-cksum-correct
PKT MBUF: port 2, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x0, data_off 128, phys_addr 0x1bef50c0
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 00:00:5e:00:01:01 -> 01:00:5e:00:00:12
VRRP: 192.168.90.100 -> 224.0.0.18
tos 0x00, ttl 255, length 32, checksum 0xc04e dscp CS0 ecn NON_ECN
fragment id 0x0000
00:03:40:726515: ethernet-input
frame: flags 0x3, hw-if-index 3, sw-if-index 3
IP4: 00:00:5e:00:01:01 -> 01:00:5e:00:00:12
00:03:40:726543: ip4-input-no-checksum
VRRP: 192.168.90.100 -> 224.0.0.18
tos 0x00, ttl 255, length 32, checksum 0xc04e dscp CS0 ecn NON_ECN
fragment id 0x0000
00:03:40:726556: ip4-mfib-forward-lookup
fib 0 entry 8
00:03:40:726565: ip4-mfib-forward-rpf
entry 8 itf 3 flags Accept,
00:03:40:726570: ip4-replicate
replicate: 6 via [@1]: dpo-receive
00:03:40:726579: ip4-receive
VRRP: 192.168.90.100 -> 224.0.0.18
tos 0x00, ttl 255, length 32, checksum 0xc04e dscp CS0
ecn NON_ECN
fragment id 0x0000
00:03:40:726596: vrrp4-input
VRRP: sw_if_index 3 IPv4
ver 3, type 1, VRID 1, prio 200, n_addrs 1, interval 100cs, csum 0x21f0
addresses: 192.168.90.50
00:03:40:726632: error-drop
rx:GigabitEthernet2/4/0
00:03:40:726640: drop
vrrp4-input: VRRP packets processed
In unicast mode, the packet trace captured on the Backup device looks like this:
01:52:00:434742: dpdk-input
GigabitEthernet2/4/0 rx queue 0
buffer 0x8ac58: current data 0, length 60, buffer-pool 0, ref-count 1, totlen-nifb 0, trace handle 0x0
ext-hdr-valid
l4-cksum-computed l4-cksum-correct
PKT MBUF: port 2, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x0, data_off 128, phys_addr 0x1dcb1680
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
IP4: 00:0c:29:a2:43:f5 -> 00:0c:29:07:6f:b8
VRRP: 192.168.90.100 -> 192.168.90.101
tos 0x00, ttl 255, length 32, checksum 0x8553 dscp CS0 ecn NON_ECN
fragment id 0x0000
01:52:00:434993: ethernet-input
frame: flags 0x3, hw-if-index 3, sw-if-index 3
IP4: 00:0c:29:a2:43:f5 -> 00:0c:29:07:6f:b8
01:52:00:435016: ip4-input-no-checksum
VRRP: 192.168.90.100 -> 192.168.90.101
tos 0x00, ttl 255, length 32, checksum 0x8553 dscp CS0 ecn NON_ECN
fragment id 0x0000
01:52:00:435024: ip4-lookup
fib 0 dpo-idx 8 flow hash: 0x00000000
VRRP: 192.168.90.100 -> 192.168.90.101
tos 0x00, ttl 255, length 32, checksum 0x8553 dscp CS0 ecn NON_ECN
fragment id 0x0000
01:52:00:435036: ip4-receive
VRRP: 192.168.90.100 -> 192.168.90.101
tos 0x00, ttl 255, length 32, checksum 0x8553 dscp CS0
ecn NON_ECN
fragment id 0x0000
01:52:00:435043: vrrp4-input
VRRP: sw_if_index 3 IPv4
ver 3, type 1, VRID 1, prio 200, n_addrs 1, interval 100cs, csum 0x26b5
addresses: 192.168.90.50
01:52:00:435077: error-drop
rx:GigabitEthernet2/4/0
01:52:00:435085: drop
vrrp4-input: VRRP packets processed
With accept-mode configured, adding the VIP address in the code and commenting out the check for unicast mode, the VRRP function also works normally. I submitted a patch for this, which may not match the author's design intent and has not been merged: https://gerrit.fd.io/r/c/vpp/+/34768.
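The gist of the experiment can be pictured as below (purely a sketch of the idea, not the submitted patch; vrrp_vr_is_unicast() and vrrp_vr_addrs_add_del() are illustrative names): upstream, a unicast VR skips the automatic virtual-address handling on state transitions, and the experiment simply disabled that early return.

/* Hypothetical sketch of the experiment, not the actual patch. */
static void
vr_transition_addrs_sketch (vrrp_vr_t *vr, vrrp_vr_state_t new_state)
{
#if 0
  /* Upstream behavior (as understood here): unicast VRs never touch
     interface addresses; an external (cloud API) action is expected instead. */
  if (vrrp_vr_is_unicast (vr))
    return;
#endif

  /* Add the virtual addresses when becoming master, remove them otherwise. */
  vrrp_vr_addrs_add_del (vr, new_state == VRRP_VR_STATE_MASTER /* is_add */);
}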
Following the author's configuration guidance and removing accept-mode did not work properly for me. I am still not clear on exactly how this is meant to be used, and will keep following the VRRP unicast mode thread on the mailing list for answers. If you understand this better, you are welcome to join the group and discuss.