
vSphere Disk Write Process

Recently I have seen quite a few questions and discussions about how vSphere handles disk writes from the guest OS, along with performance questions about write caching.

The question started off innocently as “How can we guarantee vSphere disk writes happen and are not cached?”

First, let’s analyze where this may have started. SQL Solutions posted this article, in which they tested SQL Server on a VMware virtual machine and measured the write caching of the database and the virtual machine.  The fatal flaw with this test is that they used VMplayer, which is NOT vSphere and handles disk writes nothing like vSphere does.  VMplayer (as well as Workstation and Fusion) does cache writes (which is what they have shown), whereas vSphere behaves completely differently.

vSphere handles writes differently because vSphere is enterprise server-class software.  No write issued by a guest is acknowledged to the guest operating system until it has been acknowledged by the underlying storage array.  vSphere has behaved this way since ESX 3.x.

This is true for NFS and SCSI based storage.

Does that mean that the data actually made it to a spindle?  NO.

vSphere only knows that the storage array has acknowledged the write; the data could still be sitting in cache, whether that is the cache of a RAID controller or of a SAN storage array.  Most enterprise-class storage arrays have built-in batteries to flush everything in cache to disk in the event of a power failure, but that is outside of vSphere’s control.

So how do you maintain storage consistency and data integrity?  The simple answer is to use enterprise storage and ensure that the battery-backed cache or the UPS for the array can outlast the power outage (i.e. until generator power comes up or utility power is restored).

Options like FUA (Force Unit Access) are not feasible, since modern HBAs, RAID controllers, SANs, and file systems strip this control bit from the I/O; in addition, every single write I/O would need the FUA bit set for this approach to work.

Increasing the heap size for VMFS

ESX 3.0 through ESX 4.0 limit the heap size for VMFS to 16 MB, which allows just 4 TB of VMFS files to be open at a time.  Once this threshold is crossed, the ESX host starts to behave erratically, possibly crashing the VMs that were powered on at the time.  To avoid this problem you can increase the heap size to 128 MB, which allows up to 32 TB of storage to be open on a single ESX host.

The vmkernel log will display the following if your host is getting close to the limit:

WARNING: Heap: 1370: Heap_Align(vmfs3, 4096/4096 bytes, 4 align) failed. caller: 0x8fdbd0
WARNING: Heap: 1266: Heap vmfs3: Maximum allowed growth (24) too small for size (8192)

To correct this problem:

  1. Log in to vCenter or use the VI client to connect directly to the ESX host in question
  2. Click on the Configuration tab
  3. Click Advanced Settings
  4. Find and select VMFS3
  5. Set VMFS3.MaxHeapSizeMB to 128
  6. Reboot the ESX host
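If you prefer the service console, the same setting can be read and changed from the command line with esxcfg-advcfg (a sketch; verify the option path on your ESX build):

```shell
# Check the current VMFS3 heap size on the ESX host
esxcfg-advcfg -g /VMFS3/MaxHeapSizeMB

# Raise it to 128 MB; a reboot is still required for the change to take effect
esxcfg-advcfg -s 128 /VMFS3/MaxHeapSizeMB
```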

ESX 4.1’s default is 80 MB, which allows 20 TB to be open at once, so it is less likely to run into this issue.
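The figures above all follow the same ratio — roughly 256 GB of open VMFS storage per MB of VMFS heap — which gives a quick back-of-the-envelope way to sanity-check a heap value (my own arithmetic from the numbers in this post, not an official VMware formula):

```shell
# ~256 GB of open VMFS files per MB of heap (ratio inferred from the figures above)
echo $((16 * 256))    # 16 MB heap  -> 4096 GB  (4 TB,  ESX 3.0-4.0 default)
echo $((80 * 256))    # 80 MB heap  -> 20480 GB (20 TB, ESX 4.1 default)
echo $((128 * 256))   # 128 MB heap -> 32768 GB (32 TB, the increased value)
```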

Update: There is a VMware KB on this problem here: