NetApp has released a new whitepaper titled: VMware vSphere 4.1 Storage Performance: Measuring FCoE, FC, iSCSI, and NFS Protocols. The 25-page paper covers the relative I/O performance available from SAN and NAS storage protocols with vSphere 4.1 and a NetApp FAS array. The results were obtained from shared and non-shared datastores with VAAI enabled, and the paper also measures the gains provided by the Paravirtual SCSI adapter. It covers both large numbers of VMs accessing shared datastores and single VMs generating high levels of concurrent I/O against a non-shared datastore.
The tests were conducted on an 8-node vSphere 4.1 cluster. Each host was a Fujitsu Primergy RX200 with two quad-core Intel Xeon E5507 (Nehalem) CPUs, 48 GB of memory, QLogic CNAs and HBAs, and Intel NICs. The I/O load was generated by 128 VMs, each running IOMeter. The storage array was a NetApp FAS6210 running Data ONTAP 8.0.1RC2, configured with 190 15k SAS drives and connected to a pair of Cisco Nexus 5020 unified fabric network switches via NetApp's Unified Connect CNA.
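The paper does not reproduce the IOMeter access specifications themselves, but the kind of mixed random workload it tests, such as a 75/25 random read/write mix at a 4K block size, can be sketched in a few lines. The following Python snippet is purely illustrative (all names and parameters are mine, not from the paper): it issues randomly placed 4K-aligned reads and writes against a file in the stated ratio.

```python
import os
import random

def run_mixed_workload(path, file_size=1 << 20, block_size=4096,
                       read_pct=0.75, ops=1000, seed=42):
    """Issue a random mix of reads and writes against a file,
    mimicking a 75/25 read/write access pattern at 4K blocks."""
    rng = random.Random(seed)
    # Pre-create the test file so every random offset is valid.
    with open(path, "wb") as f:
        f.write(b"\0" * file_size)
    payload = os.urandom(block_size)
    reads = writes = 0
    with open(path, "r+b") as f:
        for _ in range(ops):
            # Pick a random block-aligned offset within the file.
            offset = rng.randrange(file_size // block_size) * block_size
            f.seek(offset)
            if rng.random() < read_pct:
                f.read(block_size)
                reads += 1
            else:
                f.write(payload)
                writes += 1
    return reads, writes
```

A real benchmark like IOMeter additionally controls queue depth, bypasses the OS cache, and measures latency per operation; this sketch only shows the shape of the access pattern.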
The paper covers the following topics:
- Executive Summary
- Shared datastores, 75/25 random read/write workload with a 4K block size, with relative throughput and latency comparisons
- Shared datastores, 75/25 random read/write workload with an 8K block size, with relative throughput and latency comparisons
- High-performance non-shared datastore, 60/40 random read/write workload with an 8K block size, comparing throughput, latency, and CPU utilization between the LSI Logic and Paravirtual SCSI adapters
- Test design and configuration, detailing the installation and configuration of the test environment
- References, acknowledgements, and feedback
Conclusion from NetApp:
We (NetApp) believe these tests demonstrate that the combination of vSphere 4.1 and the NetApp unified storage platform provides enterprise-class performance in a variety of typical production scenarios with any of the protocols supported by VMware and NetApp.
The large number of protocols supported clearly provides the ultimate in flexibility for our customers to move forward with emerging data center standards like Data Center Ethernet while maintaining the viability of their existing vSphere environments. Additionally, we found that using the PVSCSI driver in our high-I/O environment allowed us to generate performance comparable to that of LSI Logic while using significantly fewer VM CPU resources.
Finally, we found that the differences in vSphere host CPU resources consumed by the different protocols during the shared datastore testing were generally in the range of 3% or less and deemed statistically irrelevant. Therefore, the performance engineering teams at VMware and NetApp agreed to omit comparative charts from this report.