Virtualization Technology News and Information
Recovering Disk Space from Thin-Provisioned Disks


A Contributed Article by Robert Nolan, president and CEO of Raxco Software.

In a March 17, 2011 article in Enterprise Storage Forum entitled "Is it Time to Add More Storage," [1] Ken Hess writes that thin-provisioning is "another good 'in theory' practice that works everywhere except the production data center." Hess goes on to say that overprovisioning is rampant in data centers and that the reason is thin-provisioning. In a related Windows IT Pro article from May 19, 2011, "VMware ESX Disk Configuration Options," [2] Alan Sugano makes a similar point: thin-provisioning saves space, but you take a performance hit and increase the probability of running out of space on the ESX storage group.

Hess asserts that thin-provisioning virtual disks is a bad idea that leads to storage waste and outages, but that thin-provisioning on the SAN side is a good one, reducing waste and allowing faster expansion as user demands on the storage pool grow. Sugano suggests that if you are going to thin-provision virtual disks, you do so only for base images and not data drives; once users start writing to those thin-provisioned data drives, you can quickly run out of space.

The scenarios Hess and Sugano describe happen every day in the real world and cause a lot of angst for storage and system administrators alike. If you Google the term "thin-provision zero-fill," you can find dozens of inquiries like this one from a VMware community site, where a site experienced storage blowout on thin-provisioned disks and is looking for a way to get its overprovisioned space back.

Up to now, the de facto answer for recovering the blown-out space in a thin-provisioned virtual disk has been the free SDelete utility from Microsoft's Sysinternals site. However, SDelete has a number of downsides that make it cumbersome to use:

  1. SDelete works by creating a large file that consumes all the free space on the disk, so all applications and services using that disk need to be shut down until SDelete completes.
  2. SDelete can only be invoked from the command line.
  3. SDelete is relatively slow and resource intensive.
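The technique SDelete uses is worth seeing in miniature: write zeros into a file until the volume is full, then delete the file, so that every previously dirty free block now reads back as zeros. The sketch below is illustrative only, with a hypothetical `zero_fill_free_space` helper demonstrated against a temp directory (capped at 4 MiB) rather than a whole volume:

```python
import os
import shutil
import tempfile

def zero_fill_free_space(path, chunk_size=1024 * 1024, max_bytes=None):
    """Write zeros to a temp file under `path` until the volume is full
    (or `max_bytes` is reached), then delete the file. Freed-but-dirty
    blocks now read back as zeros, so a zero-detecting layer can reclaim them."""
    written = 0
    fill = os.path.join(path, "zerofill.tmp")
    try:
        with open(fill, "wb") as f:
            while max_bytes is None or written < max_bytes:
                try:
                    f.write(b"\x00" * chunk_size)
                except OSError:  # disk full -> stop filling
                    break
                written += chunk_size
    finally:
        if os.path.exists(fill):
            os.remove(fill)
    return written

# Small-scale demo: zero 4 MiB inside a temp directory, not a whole disk
demo = tempfile.mkdtemp()
n = zero_fill_free_space(demo, max_bytes=4 * 1024 * 1024)
shutil.rmtree(demo)
print(n)  # bytes of "free space" that were zeroed
```

Note how the fill file monopolizes free space for the duration of the pass: this is exactly why SDelete requires quiescing applications on the volume.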

Now storage and system administrators have another option for zero-filling free space that offers the additional benefits of lowering hypervisor overhead, improving disk latency and throughput, and reducing the size of virtual disks.

PerfectDisk from Raxco Software has a free zero-fill free space capability that is quick and easy. Unlike SDelete, PerfectDisk:

  1. Runs in the background and does not consume all the free space on the disk; applications and services can remain in use while PerfectDisk zero-fills free space.
  2. Can schedule a zero-fill pass to run standalone or to follow a defragmentation pass that also consolidates the free space into the largest possible chunk.
  3. Can initiate a zero-fill from the GUI or from the command line.
  4. Is faster and uses fewer resources than SDelete.

The additional performance benefits come from the defragmentation of files and the consolidation of free space in the Windows guest. When the Windows file system fragments a file, the address of each fragment is kept in that file's record in the Master File Table (MFT) on the virtual disk. When the file is accessed, each fragment generates a separate SCSI command to the storage/SAN controller. A file in one piece requires one SCSI command to the controller; the same file in 200 fragments requires 200 SCSI commands.

A single SCSI command to the controller may map to as few as one, or possibly several, physical accesses to the disk array, depending on the array software. When the same file requires 200 SCSI commands, the mapping can require hundreds of physical accesses, since each SCSI command is mapped separately. The sheer number of SCSI commands represents overhead for the virtualization layer, and the increased physical accesses to the disks hurt disk latency and throughput. Testing with VMware demonstrated that defragmentation of Windows guest systems reduces I/O contention and improves latency and throughput, at a fraction of the cost of new disks or Fibre Channel. The complete test report can be found at
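The command amplification described above is easy to quantify: I/O scales with fragment count, not file count. A back-of-the-envelope sketch, assuming (per the article) one SCSI command per fragment and, hypothetically, two physical accesses per command:

```python
def scsi_commands(fragment_counts):
    """fragment_counts: one entry per file, giving that file's fragment count.
    Each fragment requires its own SCSI command to the controller."""
    return sum(fragment_counts)

# One contiguous file vs. the same file in 200 fragments
print(scsi_commands([1]))    # 1 command
print(scsi_commands([200]))  # 200 commands

# The array maps each command separately; at ~2 physical accesses per
# command (array-dependent assumption), 200 fragments means ~400 accesses
# where a contiguous file might have needed only a couple.
accesses_per_command = 2
print(scsi_commands([200]) * accesses_per_command)  # 400
```

The exact access multiplier varies by array, but the linear relationship between fragments and controller commands is the point.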

Besides thin-provisioned virtual disks, there are two other scenarios where zero-filling free space provides some benefit. When going P2V, you can reduce storage requirements by defragmenting the physical disk, consolidating its free space, and then zero-filling the free space; the resulting virtual disk will be smaller than the original physical disk. The second scenario is when a SAN with "zero-detection" is in use. Zero-detect SANs, like those from HP/3PAR and Hitachi, will reclaim space when they detect a zero-filled block. The PerfectDisk zero-fill can be used on physical servers to reclaim unused space when the SAN cannot determine on its own whether the data written to a block is still valid.
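Zero-detection itself is conceptually simple: on write, the array checks whether an incoming block is all zeros and, if so, unmaps it instead of storing it. A minimal sketch of that check (illustrative only; real arrays do this in firmware, typically at a fixed block or page granularity):

```python
BLOCK_SIZE = 4096  # assume 4 KiB blocks for illustration

def is_reclaimable(block: bytes) -> bool:
    """A zero-detect SAN can unmap (reclaim) a block that is all zeros."""
    return block == bytes(len(block))

print(is_reclaimable(bytes(BLOCK_SIZE)))              # True  -> returned to pool
print(is_reclaimable(bytes(BLOCK_SIZE - 1) + b"\x01"))  # False -> stays allocated
```

This is why zero-filling free space works as a reclamation trigger: it converts "deleted but still dirty" blocks, which the SAN must conservatively keep, into blocks the SAN can recognize as unused.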

Whether thin-provisioned virtual disks are a good idea is not for us to say. The fact is, thin-on-thin is being used, and inevitably there will be the occasional blowout that leaves storage overcommitted. If you are lucky, it will be tens of gigabytes; if not, you might be looking at how to reclaim several terabytes of overcommitted storage. If you are a storage administrator with thin-provisioned virtual disks, you might want to bookmark this article.


About the Author 

Robert Nolan is the president and CEO of Raxco Software, a Microsoft Gold ISV and a VMware Elite Technical Alliance Partner. He has over 25 years' experience with system management software on multiple platforms. Mr. Nolan's career includes positions in software development, product management, sales, and corporate management. He frequently speaks at Windows User Groups, VMUGs, and other venues about virtualization performance.



Published Monday, June 27, 2011 5:00 AM by David Marshall
dkvello - (Author's Link) - June 27, 2011 7:37 AM

I have one question about defrag, and also about the effect of using zero-fill on CBT.

Won't all this show up as changed blocks and wreak havoc on vStorage/CBT-based backup?

Robert Nolan - (Author's Link) - June 27, 2011 1:46 PM

This article is about recovering space from thin-provisioned VMs where some event blows out the storage and you want to reclaim the space. Take the case of a thin-provisioned 500GB VM that usually has 50GB of files in use, where an app unexpectedly produces a 400GB log file, allocating 400GB of storage. The CBT gets a pretty good workout when this happens too, even though you may not need the log file. Once you delete the log file, the 400GB of storage remains allocated; as far as the SAN is concerned, this space is still in use and not available to other VMs.

Zero-filling the free space will generate CBT activity reflecting the changes. If your storage has zero-detection capability, you can recover the 400GB of storage, returning it to the storage pool for reuse. There is a trade-off here between the CBT overhead and the benefit of the recovered storage.
