Deep Caches Can Unstir the I/O Blender

- written by Patrick Kelsey, Dataram Storage Blog Team Member

“I/O blender” is a colorful term coined to describe a phenomenon often seen in virtualized environments, where many virtual servers share a single storage array. In such environments, the sequential or semi-random I/O workloads generated by the individual virtual servers collectively appear to the storage array as a heavily random workload, often dominated by writes. The storage array has only a limited ability to maintain performance in the face of a heavily random, write-dominated workload, especially in the mid-range, where the amount of cache available on the array controller is typically in the single-digit gigabytes.
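To make the blending effect concrete, here is a minimal, purely illustrative sketch (not from the article; the VM count, region layout, and scheduling model are assumptions). Each simulated VM writes strictly sequentially within its own region of a shared LUN, but because the hypervisor services whichever VM happens to issue I/O next, the arrival order at the array looks random:

```python
# Illustrative sketch: several sequential per-VM write streams
# look random once multiplexed onto one shared array.
import random

random.seed(0)

# Hypothetical setup: 8 VMs, each writing sequentially in its own
# region of the shared LUN (region size chosen arbitrarily here).
NUM_VMS = 8
REGION_SIZE = 100_000  # blocks per VM region (assumed value)
cursors = {vm: vm * REGION_SIZE for vm in range(NUM_VMS)}

def next_io():
    """Pick whichever VM issues I/O next; from the array's point of
    view the interleaving of VMs is effectively random."""
    vm = random.randrange(NUM_VMS)
    lba = cursors[vm]
    cursors[vm] += 1  # each VM's own stream stays strictly sequential
    return lba

stream = [next_io() for _ in range(20)]

# Count consecutive arrivals at the array that are actually
# sequential (LBA exactly one past the previous) -- very few are.
seq = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
print(stream)
print(f"sequential pairs seen by the array: {seq} of {len(stream) - 1}")
```

Every individual stream is perfectly sequential, yet almost no adjacent pair in the merged stream is — which is exactly the pattern a mid-range controller's small cache struggles to absorb.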

Inserting a high-capacity caching appliance like our XcelaSAN into the SAN between the virtualized servers and the storage array greatly improves the situation. With XcelaSAN you now have a quarter terabyte or more of cache absorbing that heavily random write stream at DRAM speeds, followed by deep sorting and careful feeding of the backend array with sequential data. IOPS and latency both improve, and previously over-provisioned arrays can be more fully utilized.
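The absorb-then-sort idea can be sketched in a few lines. This is not XcelaSAN's actual design — the class, threshold, and flush policy below are illustrative assumptions — but it shows how a write-back cache turns a random front-end write stream into near-sequential batches for the backend:

```python
# Illustrative sketch (hypothetical design, not XcelaSAN internals):
# a write-back cache absorbs random writes in memory, then destages
# dirty blocks to the backend array in sorted (ascending LBA) order.
class WriteBackCache:
    def __init__(self, flush_threshold=4):
        self.dirty = {}                  # lba -> data, absorbed at DRAM speed
        self.flush_threshold = flush_threshold
        self.backend_log = []            # order in which the array sees writes

    def write(self, lba, data):
        self.dirty[lba] = data           # random write lands in cache
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # "Deep sorting": destage dirty blocks in ascending LBA order,
        # so the backend array sees a largely sequential stream.
        for lba in sorted(self.dirty):
            self.backend_log.append(lba)
        self.dirty.clear()

cache = WriteBackCache(flush_threshold=4)
for lba in [37, 2, 90, 11, 5, 64, 3, 72]:  # random-looking front-end writes
    cache.write(lba, b"x")
print(cache.backend_log)  # → [2, 11, 37, 90, 3, 5, 64, 72]
```

Each flush batch arrives at the array in ascending LBA order, which is far cheaper for the array to service than the original random sequence; a real appliance would also coalesce overwrites and size batches to the backend's stripe geometry.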

