...
This issue concerns the VAAI UNMAP primitive on a VPLEX virtual volume: the Delete Status reports unsupported on the ESXi host:

[root@ESXi:/] esxcli storage core device vaai status get -d naa.
   VAAI Plugin Name: VMW_VAAIP_CX
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported

The UNMAP feature became available for Unity arrays with VPLEX 6.0 Service Pack 1, so first confirm that VPLEX is running a code level that supports UNMAP. However, even when VPLEX is running a version that supports UNMAP for Unity, the feature can still report unsupported. UNMAP is limited to thin-provisioned volumes, so also check whether the LUNs are thin provisioned:

[root@ESXi:~] esxcli storage core device list -d naa.
   ...
   Thin Provisioning Status: unknown
Either the code level running on VPLEX does not support the UNMAP feature for Unity, or, if VPLEX is running a version that does support UNMAP for Unity, the LUNs are thick provisioned, or the LUNs are thin provisioned on Unity but the thin-enabled feature is disabled on the VPLEX virtual volume.
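To determine which case applies, the running VPLEX code level can be checked first. This is a minimal sketch, assuming a standard VPlexcli session; the exact output format varies by release:

VPlexcli:/> version

The reported product version should be 6.0 Service Pack 1 or later for Unity UNMAP support.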
If VPLEX is not running a version that supports the UNMAP feature for Unity, upgrade VPLEX to 6.0 SP1 or later, which supports thin UNMAP. Otherwise, work through the checks below.

Checks on VPLEX:

1. Verify that the storage volume provisioned to VPLEX is thin capable:

VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
Name                      VPD83 ID                  Capacity  Use   Vendor  IO Status  Type    Thin Rebuild  Provision Type  Thin Capable
------------------------  ------------------------  --------  ----  ------  ---------  ------  ------------  --------------  ------------
VPD83T3:514f0c5b94e0002c  VPD83T3:514f0c5b94e0002c  15G       used  Unity   alive      normal  false         legacy          true

2. Verify that the thin-enabled feature is enabled on the VPLEX virtual volume:

VPlexcli:/clusters/cluster-1/virtual-volumes/testvolume_vol> ll
Name                        Value
--------------------------  ----------------------------------------
block-count                 3932160
block-size                  4K
cache-mode                  synchronous
capacity                    15G
consistency-group           -
expandable                  true
expandable-capacity         0B
expansion-method            storage-volume
expansion-status            -
health-indications          []
health-state                ok
locality                    local
operational-status          ok
recoverpoint-protection-at  []
recoverpoint-usage          -
scsi-release-delay          0
service-status              running
storage-tier                -
supporting-device           testvolume
system-id                   testvolume_vol
thin-capable                true
thin-enabled                disabled
volume-type                 virtual-volume
vpd-id                      VPD83T3:6000144000000010b021768279d4285d

3. If the thin feature is disabled on the virtual volume, enable it:

VPlexcli:/clusters/cluster-1/virtual-volumes/testvolume_vol> set thin-enabled 1

Checks on ESXi:

4. Rescan storage on the ESXi server. This can be done from the command line or from vCenter:

[root@ESXi:/] esxcli storage core adapter rescan --all

5. Run the storage core path command and locate the VPLEX virtual-volume NAA ID:

[root@ESXi:/] esxcli storage core path list
fc.20000024ff548aa8:21000024ff548aa8-fc.5000144047b02176:50001442d0217600-naa.6000144000000010b021768279d4285d
   UID: fc.20000024ff548aa8:21000024ff548aa8-fc.5000144047b02176:50001442d0217600-naa.6000144000000010b021768279d4285d
   Runtime Name: vmhba2:C0:T3:L0
   Device: naa.6000144000000010b021768279d4285d
   Device Display Name: EMC Fibre Channel Disk (naa.6000144000000010b021768279d4285d)
   Adapter: vmhba2
   Channel: 0
   Target: 3
   LUN: 0
   Plugin: NMP
   State: active
   Transport: fc
   Adapter Identifier: fc.20000024ff548aa8:21000024ff548aa8
   Target Identifier: fc.5000144047b02176:50001442d0217600
   Adapter Transport Details: WWNN: 20:00:00:24:ff:54:8a:a8 WWPN: 21:00:00:24:ff:54:8a:a8
   Target Transport Details: WWNN: 50:00:14:40:47:b0:21:76 WWPN: 50:00:14:42:d0:21:76:00
   Maximum IO Size: 33553920

The Device field gives the VPLEX virtual-volume NAA ID (naa.6000144000000010b021768279d4285d); it matches the vpd-id shown on the VPLEX virtual volume in step 2, with the VPD83T3: prefix removed.
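If the full path listing is too long to scan, the same details can be pulled for just the VPLEX device by passing its NAA ID with the -d option (a sketch using the device ID from this example):

[root@ESXi:/] esxcli storage core path list -d naa.6000144000000010b021768279d4285d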
6. Before a partition table or datastore is created on the volume, ESXi cannot determine the Thin Provisioning status for that volume, so it still reports the Delete primitive as unsupported:

[root@ESXi:/] esxcli storage core device vaai status get -d naa.6000144000000010b021768279d4285d
naa.6000144000000010b021768279d4285d
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported

[root@ESXi:/] esxcli storage core device list -d naa.6000144000000010b021768279d4285d
naa.6000144000000010b021768279d4285d
   Display Name: EMC Fibre Channel Disk (naa.6000144000000010b021768279d4285d)
   Has Settable Display Name: true
   Size: 15360
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.6000144000000010b021768279d4285d
   Vendor: EMC
   Model: Invista
   Revision: 5520
   SCSI Level: 4
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: supported
   Other UIDs: vml.02000000006000144000000010b021768279d4285d496e76697374
   Is Shared Clusterwide: true
   Is Local SAS Device: false
   Is SAS: false
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: false
   Device Max Queue Depth: 64
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

7. Create a datastore on the volume, or manually create a partition table for it. The Thin Provisioning Status then changes to yes and the Delete Status changes to supported:

[root@ESXi:/] esxcli storage core device list -d naa.6000144000000010b021768279d4285d
   ...
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes

[root@ESXi:/] esxcli storage core device vaai status get -d naa.6000144000000010b021768279d4285d
naa.6000144000000010b021768279d4285d
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported

8. If the ESXi host still fails to update the UNMAP status after the steps above, consult the relevant VMware KB articles and investigate from the ESXi side. A manual example for steps 7 and 8 is sketched below.
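The following is a minimal sketch for steps 7 and 8 from the ESXi shell, assuming the device naa.6000144000000010b021768279d4285d and a hypothetical datastore label Datastore01. Note that partedUtil setptbl overwrites any existing partition table, so only run it on an empty volume.

Write a GPT partition table to the device manually (creating a VMFS datastore in vCenter also lays down a partition table):

[root@ESXi:/] partedUtil setptbl "/vmfs/devices/disks/naa.6000144000000010b021768279d4285d" gpt

Re-check the VAAI Delete status afterwards:

[root@ESXi:/] esxcli storage core device vaai status get -d naa.6000144000000010b021768279d4285d

Manually reclaim free space on the datastore (available on ESXi 5.5 and later) to verify UNMAP end to end:

[root@ESXi:/] esxcli storage vmfs unmap -l Datastore01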