...
Group-Specific Queries are sent when a router receives a State-Change record indicating that a system is leaving a group. The VMs do not receive these router queries and therefore cannot update their multicast tables accordingly.
IGMPv3 Specific Query -- not dropped by the vDS
IGMPv3 SSM Specific Query -- dropped by the vDS
The iperf command used below produces an IGMPv3 SSM (Source-Specific Multicast) join. When one of the clients leaves, the physical switch responds with an SSM Specific Query, and that query is what is being dropped.
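The symptom can be confirmed from inside the guest. As a minimal sketch, assuming a Linux receiver whose interface is named ens192 (the interface name is an assumption for this example), an IGMP capture should show the VM's own membership reports going out but the SSM Specific Query never arriving after another client leaves the group:

# Capture IGMP traffic on the guest interface (ens192 assumed)
[root@CentOS-2 ~]# tcpdump -i ens192 -n igmp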
This article explains a known issue in which Multicast Source-Specific messages are dropped. Some configurations work around this issue by allowing all multicast packets, but in environments where a large amount of multicast traffic is seen, more selective filtering is needed.
IGMP, Snooping/SSM
ESXi does not handle SSM group-specific queries with a source IP of 0.0.0.0 correctly, regardless of the snooping mode. Some physical switches, such as Dell and Cisco, rely on the IGMPv3 group-specific query to verify whether a port is still subscribed to a specific group after receiving a leave report.
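One way to see where the query is lost is to capture IGMP on the physical uplink and on the VM's vDS port at the same time. This is a sketch only; the uplink name vmnic0 and the switch port ID 67108875 are assumptions taken from the example environment further below and must be adjusted:

# IGMP (IP protocol 0x02) arriving on the physical uplink (vmnic0 assumed)
[root@ESXi:~] pktcap-uw --uplink vmnic0 --proto 0x02 -o /tmp/uplink-igmp.pcap
# IGMP actually delivered to the VM's switch port (port ID assumed)
[root@ESXi:~] pktcap-uw --switchport 67108875 --proto 0x02 -o /tmp/port-igmp.pcap

If the group-specific query appears in the uplink capture but not in the switch port capture, the vDS is dropping it.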
This is fixed in vSphere 6.7 P06 and 7.0.3. The fix makes IGMPv3 group-specific queries follow the same code logic as a general query, leveraging the legacy lookup to dispatch the packet. In this way, as long as the vNIC has joined a group, that group's specific query will be dispatched to the vNIC.
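Whether a given host already contains the fix can be checked against its version and build number; compare the reported build with the release notes for 6.7 P06 and 7.0.3, as no build numbers are listed in this article:

[root@ESXi:~] vmware -vl
[root@ESXi:~] esxcli system version get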
To work around this issue, allowing all multicast traffic to the specific vNIC should be configured.
Note: Additional traffic load will be sent to the VM, increasing CPU usage. This may not be a suitable workaround in environments with large amounts of multicast traffic.

Linux
For a Linux VM using the vmxnet3 driver, the ETH_FILTER_ALLMULTI flag can be set on the VM's ethX interface with 'ifconfig ethX allmulti'.

[root@ESXi:~] vsish -e get /net/portsets/DvsPortset-0/ports/67108870/status | grep -i flags
flags:port flags: 0x40013 -> IN_USE ENABLED WORLD_ASSOC CONNECTED ASSOCIATE_TO_L2_CTRL
Impl customized blocked flags:0x00000000
flags:0x0000000b
flags:0x0000000b

[root@CentOS-1 ~]# ifconfig ens192 allmulti

[root@ESXi:~] vsish -e get /net/portsets/DvsPortset-0/ports/67108870/status | grep -i flags
flags:port flags: 0x40013 -> IN_USE ENABLED WORLD_ASSOC CONNECTED ASSOCIATE_TO_L2_CTRL
Impl customized blocked flags:0x00000000
flags:0x0000000d
flags:0x0000000d

Windows
E1000e: The e1000e driver in the Windows guest sets the ETH_FILTER_ALLMULTI flag by default, allowing all multicast packets.
vmxnet3: Configure multicast forwarding inside the guest OS. The default value is Disabled.
-MulticastForwarding
Specifies a value for multicast forwarding. The cmdlet modifies the value for this setting. The acceptable values for this parameter are:
Enabled. The computer can forward multicast packets.
Disabled. The computer cannot forward multicast packets.
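For reference, a minimal sketch of equivalent commands, assuming a Linux guest interface named ens192 and a Windows adapter alias of "Ethernet0" (both names are assumptions for this example): on modern Linux guests the ip tool sets the same all-multicast flag as 'ifconfig allmulti', and the -MulticastForwarding parameter described above belongs to the Set-NetIPInterface cmdlet.

# Linux: enable the all-multicast flag on the vmxnet3 interface (ens192 assumed)
[root@CentOS-1 ~]# ip link set dev ens192 allmulticast on

# Windows: enable multicast forwarding on the adapter ("Ethernet0" assumed)
PS C:\> Set-NetIPInterface -InterfaceAlias "Ethernet0" -MulticastForwarding Enabled

The Linux flag typically does not persist across guest reboots, so it needs to be reapplied by the guest's network configuration.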
To reproduce this issue:

To get the multicast membership:
[root@ds-tse-h42:~] net-stats -l | grep -i cent
67108874 5 9 DvsPortset-0 00:50:56:b7:8d:4b CentOS-1.eth0
67108875 5 9 DvsPortset-0 00:50:56:b7:b5:ef CentOS-2.eth0
[root@ds-tse-h42:~] netdbg vswitch mcast_filter get --port 67108875 --dvs-alias Multicast-vDS
33:33:ff:b7:b5:ef
33:33:00:00:00:01
01:00:5e:00:00:01

1. Multicast clients CentOS-1 and CentOS-2 both join the SSM group:
# iperf -s -B 239.255.1.3 -H 10.0.30.101 -u -f m -i 5
Note: This is only supported in iperf2 for this protocol.
2. Wait 5 seconds for each client to send 2-3 join packets.
3. Make CentOS-1 leave the group by stopping iperf:
# Ctrl + C
4. The multicast router receives the Leave request and responds with several Specific Queries (from Dell and Cisco switches) and a Group-and-Source Specific Query (from Cisco switches) to see if there are any multicast clients left on that physical switch port.
5. ESXi correctly removes CentOS-1 from the group, and CentOS-2 remains in the group:
[root@ds-tse-h42:~] netdbg vswitch mcast_filter get --port 67108875 --dvs-alias Multicast-vDS
33:33:ff:b7:b5:ef
33:33:00:00:00:01
01:00:5e:7f:01:03 <--- This group corresponds to 239.255.1.3
01:00:5e:00:00:01
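While reproducing, group membership can also be confirmed from inside the guests by reading the kernel's IGMP state; ens192 is an assumed interface name:

# Multicast group memberships for the interface
[root@CentOS-2 ~]# ip maddr show dev ens192
# Kernel IGMP membership table
[root@CentOS-2 ~]# cat /proc/net/igmp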