To meaningfully describe the performance characteristics of any storage device, it is necessary to specify at least three metrics simultaneously: IOPS, response time, and (application) workload. Without a simultaneous specification of response time and workload, IOPS figures are essentially meaningless. In isolation, IOPS are analogous to the revolutions per minute of an automobile engine: an engine that can spin at 10,000 RPM with its transmission in neutral conveys nothing of value, whereas an engine that develops a specified torque and horsepower at a given RPM fully describes its capabilities.
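The link between IOPS, response time, and offered load can be made concrete with Little's Law, a standard queueing-theory result (concurrency = throughput × latency). The sketch below uses illustrative numbers and an assumed helper name to show why the same IOPS figure can describe two very different devices:

```python
# Little's Law: outstanding I/Os = IOPS x response time.
# Illustrative numbers only; real devices deviate once queues saturate.

def iops_at(queue_depth: int, response_time_s: float) -> float:
    """Throughput a device must sustain to keep `queue_depth` I/Os
    in flight at the given average response time."""
    return queue_depth / response_time_s

# 100,000 IOPS at 100 microseconds needs only 10 I/Os in flight;
# 100,000 IOPS at 10 ms needs 1,000 -- a very different claim.
print(iops_at(10, 0.0001))
print(iops_at(1000, 0.010))
```

Quoting the latency alongside the IOPS figure is what distinguishes the two cases.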
The specific number of IOPS possible in any system configuration will vary greatly, depending upon the variables the tester enters into the program, including the balance of read and write operations, the mix of sequential and random access patterns, the number of worker threads and queue depth, and the data block sizes.[1] Other factors can also affect the IOPS results, including the system setup, storage drivers, and OS background operations. Also, when testing SSDs in particular, preconditioning must be taken into account.[3]
Performance characteristics
The most common performance characteristics measured are sequential and random operations. Sequential operations access locations on the storage device in a contiguous manner and are generally associated with large data transfer sizes, e.g. ≥ 128 kB. Random operations access locations on the storage device in a non-contiguous manner and are generally associated with small data transfer sizes, e.g. 4 kB.
The most common performance characteristics are as follows:

Total IOPS: total number of I/O operations per second (when performing a mix of read and write tests)
Random Read IOPS: average number of random read I/O operations per second
Random Write IOPS: average number of random write I/O operations per second
Sequential Read IOPS: average number of sequential read I/O operations per second
Sequential Write IOPS: average number of sequential write I/O operations per second
For HDDs and similar electromechanical storage devices, the random IOPS numbers are primarily dependent upon the storage device's random seek time, whereas for SSDs and similar solid-state storage devices, the random IOPS numbers are primarily dependent upon the storage device's internal controller and memory interface speeds. On both types of storage devices, the sequential IOPS numbers (especially when using a large block size) typically indicate the maximum sustained bandwidth that the storage device can handle.[1] Often sequential IOPS are reported as a simple megabytes-per-second number, using the relationship: throughput (MB/s) = IOPS × transfer size (MB).
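The conversion from sequential IOPS to bandwidth can be sketched as a small helper (assumed function name; decimal megabytes, as vendors usually quote):

```python
def seq_iops_to_mbps(iops: float, block_size_bytes: int) -> float:
    """Sequential IOPS times transfer size gives sustained bandwidth.
    Uses decimal megabytes (1 MB = 1,000,000 bytes)."""
    return iops * block_size_bytes / 1_000_000

# 2,000 sequential IOPS at 128 kB per operation = 256 MB/s.
print(seq_iops_to_mbps(2000, 128_000))  # 256.0
```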
Some HDDs/SSDs will improve in performance as the number of outstanding I/Os (i.e. queue depth) increases. This is usually the result of more advanced controller logic on the drive performing command queuing and reordering, commonly called either Tagged Command Queuing (TCQ) or Native Command Queuing (NCQ). Many consumer SATA HDDs either cannot do this, or their implementation is so poor that no performance benefit can be seen.[citation needed] Enterprise-class SATA HDDs, such as the Western Digital Raptor and Seagate Barracuda NL, will improve by nearly 100% with deep queues.[4] High-end SCSI drives, more commonly found in servers, generally show much greater improvement, with the Seagate Savvio exceeding 400 IOPS—more than doubling its performance.[citation needed]
While traditional HDDs have about the same IOPS for read and write operations, many NAND flash-based SSDs and USB sticks are much slower at writing than reading, due to the inability to rewrite directly to a previously written location, which forces a procedure called garbage collection.[5][6][7] This has led hardware test sites to provide independently measured read and write results when testing IOPS performance.
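When read and write IOPS differ, a first-order estimate of mixed-workload IOPS is the weighted harmonic mean of the two figures, since each operation type contributes its own average service time. This is a simplified model with illustrative numbers (real drives interact in more complex ways, especially under garbage collection):

```python
def mixed_iops(read_iops: float, write_iops: float, read_fraction: float) -> float:
    """Weighted harmonic mean: total time per op is the read fraction's
    share of read service time plus the write fraction's share of
    write service time."""
    return 1.0 / (read_fraction / read_iops + (1.0 - read_fraction) / write_iops)

# A drive doing 35,000 read IOPS but only 5,000 write IOPS manages far
# fewer than the arithmetic midpoint on a 70/30 mix: slow writes dominate.
print(round(mixed_iops(35_000, 5_000, 0.7)))  # 12500
```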
Flash SSDs, such as the Intel X25-E (released 2010), have much higher IOPS than traditional HDDs. In a test done by Xssist using Iometer, with 4 kB random transfers, a 70/30 read/write ratio, and queue depth 4, the IOPS delivered by the Intel X25-E 64 GB G1 started at around 10,000 IOPS, dropped sharply after 8 minutes to 4,000 IOPS, and continued to decrease gradually for the next 42 minutes. From approximately 50 minutes onward, IOPS varied between 3,000 and 4,000 for the rest of the 8+ hours the test ran.[8] Even with the drop in random IOPS after the 50th minute, the X25-E still has much higher IOPS than traditional hard disk drives. Some SSDs, including the OCZ RevoDrive 3 x2 PCIe using the SandForce controller, have shown much higher sustained write performance that more closely matches the read speed.[9] For example, a typical operating system has many small files (such as DLLs ≤ 128 kB), so an SSD is more suitable as a system drive.[10]
Examples
Mechanical hard drives
The block size used when testing significantly affects the number of IOPS performed by a given drive. See below for some typical performance figures:[11]
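A rough model of why block size matters for a mechanical drive: each random operation pays an average seek, half a rotation of latency on average, and a transfer time proportional to the block size. The drive parameters below are hypothetical, chosen only to illustrate the shape of the effect:

```python
def hdd_random_iops(avg_seek_s: float, rpm: int, transfer_mb_s: float,
                    block_kb: float) -> float:
    """Estimate random IOPS as 1 / (seek + avg rotational latency + transfer)."""
    rotational = 0.5 * 60.0 / rpm                 # half a revolution, on average
    transfer = (block_kb / 1000.0) / transfer_mb_s
    return 1.0 / (avg_seek_s + rotational + transfer)

# Hypothetical 7,200 RPM drive: 8 ms average seek, 150 MB/s sustained transfer.
print(round(hdd_random_iops(0.008, 7200, 150.0, 4)))     # 4 kB blocks
print(round(hdd_random_iops(0.008, 7200, 150.0, 1024)))  # 1 MB blocks
```

With small blocks the seek and rotational latency dominate, so IOPS barely changes with block size; with large blocks the transfer term grows and IOPS falls while total bandwidth rises.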
Intel's data sheet[15] claims 3,300 IOPS for writes and 35,000 IOPS for reads; 5,000 IOPS were measured for a mixed workload. The Intel X25-E G1 has around three times the IOPS of the Intel X25-M G2.[16]
SandForce-1200 based SSD drives with enhanced firmware are stated to reach up to 50,000 IOPS, but benchmarking of this particular drive shows ~25,000 IOPS for random read and ~15,000 IOPS for random write.[17]
4 kB aligned random I/O with four workers at QD4 (effectively QD16):[26] on the 1 TB model, 14,000 read IOPS and 50,000 write IOPS at QD1; 330,000 read IOPS and 330,000 write IOPS on the 500 GB model; 300,000 read IOPS and 330,000 write IOPS on the 250 GB model. Up to[neutrality is disputed] 3.2 GB/s sequential read and 1.9 GB/s sequential write.[25]
4 kB aligned random I/O with four workers at QD4 (effectively QD16):[26] on the 1 TB and 2 TB models, 14,000 read IOPS and 50,000 write IOPS at QD1; 330,000 read IOPS and 330,000 write IOPS on the 512 GB model. Up to[neutrality is disputed] 3.5 GB/s sequential read and 2.1 GB/s sequential write.[25]
1,261,145 SPECsfs2008 NFSv3 IOPS using 1,440 15k disks across 60 shelves, with virtual storage tiering.[30][unreliable source?] Protocols: NFS, SMB, FC, FCoE, iSCSI.
SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms.