...
Release date: July 07, 2015

Patch Category: Bugfix
Patch Severity: Critical
Build: For build information, see KB 2111982.
Host Reboot Required: Yes
Virtual Machine Migration or Shutdown Required: Yes
Affected Hardware: N/A
Affected Software: N/A
VIBs Included: VMware:esx-base:6.0.0-0.11.2809209
PRs Fixed: 1316606, 1370300, 1373180, 1375209, 1380638, 1383027, 1383201, 1383497, 1384196, 1386885, 1389648, 1394010, 1394481, 1397096, 1400127, 1400137, 1400396, 1401036, 1401079, 1401410, 1401736, 1402206, 1404041, 1406764, 1407391, 1408748, 1411484, 1417354, 1423644, 1424506, 1426901, 1429000, 1431366
Related CVE numbers: N/A
Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

- When the UserWorld is stuck in HeapMoreCore with an infinite timeout due to an improper stop order, you are unable to end the sfcb process. You see an error similar to:

    failed to kill /sbin/sfcbd (8314712): No such process

- Attempts to reboot Windows 8 and Windows Server 2012 virtual machines on an ESXi host might fail. For more information, see Windows 8 and Windows 2012 Server virtual machines fail upon reboot (2092807).

- When a virtual machine is deployed or cloned with guest customization, and the VMware Tools upgrade policy is set to allow the virtual machine to upgrade VMware Tools automatically at the next power-on, VMware Tools might fail to upgrade automatically when the virtual machine is powered on for the first time.

- When an ESXi host has three or more vmknics, and you reset the network settings from the DCUI or apply a host profile where the vmknics, including the management vmknic, are on a vSphere Distributed Switch, a Hostctl exception might occur. This might cause the host to become unusable, with no connectivity, until it is rebooted.

- An ESXi host might fail when you attempt to expand a VMFS5 datastore beyond 16TB. In the vmkernel.log file, you see errors similar to:

    cpu38:xxxxx)LVM: xxxx: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)
    cpu38:xxxxx)LVM: xxxx: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)
    cpu47:xxxxx)LVM: xxxxx: LVM device naa.600000e00d280000002800c000010000:1 successfully expanded (new size: 31314089590272)
    cpu47:xxxxx)Vol3: xxx: Unable to register file system ds02 for APD timeout notifications: Already exists
    cpu47:xxxxx)LVM: xxxx: Using all available space (15657303277568).
    cpu7:xxxxx)LVM: xxxx: Error adding space (0) on device naa.600000e00d280000002800c000010000:1 to volume xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx: No space left on device
    cpu7:xxxxx)LVM: xxxx: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx: Limit exceeded
    cpu7:xxxxx)LVM: xxxx: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded
    cpu7:xxxxx)LVM: xxxx: LVMProbeDevice failed for device naa.600000e00d280000002800c000010000:1: Limit exceeded
    cpu32:xxxxx)<3>ata1.00: bad CDB len=16, scsi_op=0x9e, max=12
    cpu30:xxxxx)LVM: xxxx: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx: Limit exceeded
    cpu30:xxxxx)LVM: xxxx: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded
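  Before retrying such an expansion, it can help to confirm the volume's current extent layout and size from the host. A minimal diagnostic sketch, assuming an ESXi shell; the datastore name ds02 is taken from the log excerpt above and is only illustrative:

    # List the device extents backing each VMFS datastore
    esxcli storage vmfs extent list

    # Report the current capacity, free space, and UUID of the volume
    vmkfstools -Ph /vmfs/volumes/ds02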
- An ESXi host that is part of a vSAN cluster with 40 or more nodes might display a purple diagnostic screen due to a limit check when the nodes are added back into the membership list of a new primary after a primary node failover.

- Virtual hardware versions prior to version 11 incorrectly claim support for the Page Attribute Table (PAT) in CPUID[1].EDX[PAT]. This patch resolves the issue by extending support for the IA32_PAT MSR to all virtual hardware versions. Note: This support is limited to recording the guest's PAT in the IA32_PAT MSR. The guest's PAT does not actually influence the memory types used by the virtual machine.

- When you set the CPU limit of a single-processor virtual machine, the overall ESXi utilization might decrease due to a defect in the ESXi scheduler. This happens when the ESXi scheduler makes incorrect CPU load-balancing estimations and considers the virtual machines as running. For more details, see Setting the CPU limit of virtual machines may impact the ESXi utilization on overcommitted systems (2096897).

- The ESXi WSMAN agent (Openwsman) included in ESXi 5.0 Update 3 or ESXi patch release ESXi500-201406001, ESXi 5.1 Update 2 or ESXi patch release ESXi510-201407001, or ESXi 5.5 Update 2 might not support array parameters to createInstance(). When the wsmand service creates a CIM instance with an array-type property value using createInstance() in Openwsman, you see messages similar to:

    wsmand[6266]: working on property: DataSize
    wsmand[6266]: prop value: 572
    wsmand[6266]: xml2property([0xnnnn]DataSize:572)
    wsmand[6266]: working on property: PData
    wsmand[6266]: prop value: 7
    wsmand[6266]: xml2property([0xnnnn]PData:7)
    wsmand[6266]: *** xml2data: Array unsupported
    wsmand[6266]: working on property: ReturnCode
    wsmand[6266]: prop value: 0

- The Automatic option for virtual machine startup or shutdown might not work when the vmDelay variable is set to more than 1800 seconds. This can occur in these situations:

  - If the vmDelay variable is set to 2148 seconds or more, the automatic virtual machine startup or shutdown might not be delayed, and the hostd service might fail.
  - If the vmDelay variable is set to more than 1800 seconds, the vim-cmd command hostsvc/autostartmanager/autostart might not delay the automatic startup or shutdown tasks on a virtual machine, because the command might time out if the task is not completed within 30 minutes.

  Note: Specify the blockingTimeoutSeconds value in the hostd configuration file, /etc/vmware/hostd/config.xml. If the sum of delays is larger than 1800 seconds, you must set blockingTimeoutSeconds to a value larger than 1800 seconds. For example:

    <vimcmd>
      <soapStubAdapter>
        <blockingTimeoutSeconds>7200</blockingTimeoutSeconds>
      </soapStubAdapter>
    </vimcmd>
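  To review the autostart configuration that these delays apply to, you can query the autostart manager from an ESXi shell. A minimal sketch, assuming the vim-cmd autostart manager subcommands available on ESXi 6.0:

    # Show the configured autostart sequence, including per-VM start and stop delays
    vim-cmd hostsvc/autostartmanager/get_autostartseq

    # Show the global autostart defaults (enabled flag and default delays)
    vim-cmd hostsvc/autostartmanager/get_defaults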
- The sfcbd service might stop responding, and you might find this error message in the syslog file:

    spSendReq/spSendMsg failed to send on 7 (-1)
    Error getting provider context from provider manager: 11

  This issue occurs when there is contention for a semaphore between the CIM server and the providers.

- A Common Information Model (CIM) provider running on an ESXi host might experience a memory leak while sending CIM indications from the Small-Footprint CIM Broker (sfcb) service.

- iSCSI network port binding fails even when there is only one active uplink on a switch. This patch resolves the issue by counting only the active uplinks when deciding whether the VMkernel interface is compliant.

- Reducing the proportionalCapacity policy does not affect disk usage. This is because modifications made to the policy parameters are not passed on to the components to which they are applied.

- The values of the TX and RX throughput statistics might be very high, leading to unnecessary remapping of source ports to different VMNICs. This might be due to a miscalculation of statistics by the Load-Based Teaming algorithm.

- An ESXi host or a virtual machine might lose network connectivity after you enable port mirroring sessions on the vSphere Distributed Switch.

- The openwsmand service might stop responding when you change RAID controller properties using the ModifyInstance option. This happens when any of the following properties are changed:

  - Rebuild priority
  - Consistency check priority
  - Patrol read priority

- An ESXi host might fail to send CIM indications from sfcb to ServerView Operations Manager after you reboot the host. An error similar to the following is written to the syslog file:

    spGetMsg receiving from 72 20805-11 Resource temporarily unavailable
    rcvMsg receiving from 72 20805-11 Resource temporarily unavailable
    --- activate filter failed for indication subscription filter=root/interop:cim_indicationfilter.creationclassname="CIM_IndicationFilter",name="FTSIndicationFilter",systemcreationclassname="CIM_ComputerSystem",systemname="xx.xxx.xxx.xx", handler=root/interop:cim_indicationhandlercimxml.creationclassname="CIM_IndicationHandlerCIMXML",name="FTSIndicationListener:xx.xxx.xxx.xx", systemcreationclassname="CIM_ComputerSystem",systemname="xx.xxx.xxx.xx", status: rc 7, msg No supported indication classes in filter query or no provider found

- If the CIM client sends two Delete Instance requests to the same CIM indication subscription, the sfcb-vmware_int service might stop responding due to memory contention. As a result, you might not be able to monitor the hardware status with vCenter Server and ESXi.

- Slow NFS storage performance is observed on virtual machines running on VSA-provisioned NFS storage. Delayed acknowledgements from the ESXi host for NFS read responses might cause this performance issue. This patch resolves the issue by disabling delayed acknowledgements for NFS connections.

- This patch has been updated to include the option to pass an iSCSI initiator name to the esxcli iscsi software set command.
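  A minimal usage sketch; the option spelling --name follows later esxcli references and the IQN shown is a hypothetical example, so verify both against your build:

    # Enable the software iSCSI adapter with a custom initiator name
    # (--name is the option added by this patch; the IQN is illustrative)
    esxcli iscsi software set --enabled=true --name=iqn.1998-01.com.vmware:esx-host01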
- Attempts to provision a virtual machine using a storage policy with the Flash Read Cache Reservation attribute fail in a vSAN all-flash cluster environment.

- Persistently mounted VMFS snapshot volumes might not be mounted after you reboot the ESXi host. Log messages similar to the following are written to the syslog file:

    localcli: Storage Info: Unable to Mount VMFS volume with UUID nnnnnnnn-nnnnnnnn-nnnn-nnnnnnnnnnnn. Sysinfo error on operation returned status : Bad parameter count. Please see the VMkernel log for detailed error information
    localcli: StorageInfo: Unable to restore one or more conflict-resolved VMFS volumes

- When you limit the I/O operations per second (IOPS) value for a virtual machine disk, you see lower IOPS than the configured limit if the size of the read-write operation (I/O) is greater than or equal to 32 KB. This is because the I/O scheduler considers 32 KB as one scheduling cost unit of an I/O operation, so any operation larger than 32 KB is counted as multiple operations, which throttles the I/O. This patch resolves the issue by making the SchedCostUnit value configurable per application requirements.

  To view the current value, run this command:

    esxcfg-advcfg -g /Disk/SchedCostUnit

  To set a new value, run this command:

    esxcfg-advcfg -s 65536 /Disk/SchedCostUnit
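  The same advanced option can also be managed through esxcli. A minimal sketch, assuming an ESXi 6.0 shell; the value 65536 mirrors the example above:

    # Read the current scheduling cost unit (in bytes)
    esxcli system settings advanced list -o /Disk/SchedCostUnit

    # Raise the cost unit to 64 KB so larger requests count as fewer operations
    esxcli system settings advanced set -o /Disk/SchedCostUnit -i 65536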
- After upgrading firmware, false alarms appear in the Hardware Status tab of the vSphere Client even if the system has been idle for two to three days. In the /var/log/syslog.log file, you see entries similar to:

    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007

- When you delete a desktop pool in a VDI environment, the VMDK files of other virtual machines from a different desktop pool might get deleted, and multiple virtual machines from different desktop pools might be affected. This happens when, after the disk is deleted, the parent directory also gets deleted due to an error in which the directory is perceived as empty even though it is not. An affected virtual machine might then fail to power on with an error similar to:

    [nnnnn info 'Default' opID=nnnnnnnn] [VpxLRO] -- ERROR task-19533 -- vm-1382 -- vim.ManagedEntity.destroy: vim.fault.FileNotFound:
    --> Result:
    --> (vim.fault.FileNotFound) {
    -->    dynamicType = <unset>,
    -->    faultCause = (vmodl.MethodFault) null,
    -->    file = "[cntr-1] guest1-vm-4-vdm-user-disk-D-nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn.vmdk",
    -->    msg = "File [cntr-1] guest1-vm-4-vdm-user-disk-D-nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn.vmdk was not found",
    --> }
    --> Args

  The VMDK deletion occurs when a virtual machine's guest operating system disk and user data disk are spread across different datastores. The issue does not occur when all of the virtual machine's files reside on the same datastore.

- The vmkiscsid process might stop responding when you run an iSCSI adapter rescan operation using IPv6.

- An ESXi host might not receive SNMP v3 traps when you use a third-party management tool to collect SNMP data. In the /var/snmp/syslog.log file, you see entries similar to:

    snmpd: snmpd: snmp_main: rx packet size=151 from: 172.20.58.220:59313
    snmpd: snmpd: SrParseV3SnmpMessage: authSnmpEngineBoots(0) same as 0, authSnmpEngineTime(2772) within 0 +- 150 not in time window ....

  For further information, see SNMPv3 traps are not being received on VMware ESX 5.1 and ESXi 5.5 (2108901).

- The vSAN Observer is unable to collect statistics when hostd is not reachable, because the collection happens through hostd. This patch introduces a lightweight vSAN Observer capable of collecting statistics without requiring hostd.

- Attempts to boot an ESXi 6.0 host from an iSCSI SAN might fail. This happens when the ESXi host is unable to detect the iSCSI Boot Firmware Table (iBFT), causing the boot to fail. This issue might occur with any iSCSI adapter, including Emulex and QLogic adapters.

- The setPEContext VASA API call to a provider might fail. In the vvold.log file, you see an error similar to:

    VasaOp::ThrowFromSessionError [#47964]: ===> FINAL FAILURE setPEContext, error (INVALID_ARGUMENT / failed to invoke operation: setPEContext[com.emc.cmp.osls.api.base.InstanceOps.checkPropertyValue():269 C:ERROR_CLASS_SOFTWARE F:ERROR_FAMILY_INVALID_PARAMETER X:ERROR_FLAG_LOGICAL Property inBandBindCapability is required and cannot be null.] / ) VP (VmaxVp) Container (VmaxVp) timeElapsed=19 msecs (#outstanding 0)
    error vvold[FFDE4B70] [Originator@6876 sub=Default] VendorProviderMgr::SetPEContext: Could not SetPEContext to VP VmaxVp (#failed 1): failed to invoke operation

- Applying a host profile initially assigns a randomly generated iSCSI initiator name and then renames it to the user-defined name. This might cause some EMC targets to not recognize the initiator.

- When you execute multiple enumerate queries on the VMware Ethernet port class using the CBEnumInstances method, servers running ESXi 6.0 might report an error message similar to:

    CIM error: enumInstances Class not found

  This issue occurs when the management software fails to retrieve information provided by the VMware_EthernetPort() class. When the issue occurs, a query on memstats might display the following error message:

    MemStatsTraverseGroups: VSI_GetInstanceListAlloc failure: Not found.

Patch Download and Installation

The typical way to apply patches to ESXi hosts is through VMware vSphere Update Manager. For details, see the Installing and Administering VMware vSphere Update Manager documentation. ESXi hosts can also be updated by manually downloading the patch ZIP file from the VMware download page and installing the VIB by using the esxcli software vib command. Additionally, the system can be updated using the image profile and the esxcli software profile command. For more information, see the vSphere Command-Line Interface Concepts and Examples guide and the vSphere Upgrade guide.
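A minimal sketch of the manual esxcli workflow, assuming the patch ZIP has already been uploaded to a datastore; the bundle file name and image profile name below are hypothetical placeholders, not the exact names for this release:

    # Put the host into maintenance mode before patching
    esxcli system maintenanceMode set --enable true

    # Install the VIBs from the offline bundle (use the full datastore path)
    esxcli software vib update -d /vmfs/volumes/datastore1/ESXi600-201507001.zip

    # Or apply an image profile contained in the same bundle
    esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-201507001.zip -p <image-profile-name>

    # This patch requires a host reboot to take effect
    reboot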