
How to improve low sequential read performance for large files?

Question asked by David Baril on Dec 14, 2016
Latest reply on Apr 7, 2017 by David Baril

I was running some performance tests on a small CS/1000, using a well-tuned CentOS 7.2 system running under VMware with an external iSCSI LUN from the Nimble CS/1000.

 

I was very pleased with write performance for both small and large files (Nimble's forte), but was disappointed by the only moderate sequential read performance for large, multi-gigabyte files.

 

For example, when writing a 10 GiB file of incompressible data to a tuned XFS file system, I achieve rates over 550 MB/sec. When I read the same file back (after flushing the Linux page cache), I achieve only 145 to 180 MB/sec.
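For reference, the read-back portion of the test is essentially the following (the mount point is an example):

    # flush the Linux page cache so the read actually goes to the array
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # sequential read of the 10 GiB test file
    dd if=/mnt/xfs/testfile of=/dev/null bs=1M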

 

I'm using Linux block-layer settings on the "sd" devices of max_sectors_kb = 1024, read_ahead_kb = 4096, and nr_requests = 128, with a queue depth of 64. The same settings are applied to the multipath pseudo-device.
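These are set through sysfs, along these lines (the sd device names are examples; the same is repeated for each path and for the dm multipath device):

    for dev in sda sdb; do
        echo 1024 > /sys/block/$dev/queue/max_sectors_kb
        echo 4096 > /sys/block/$dev/queue/read_ahead_kb
        echo 128  > /sys/block/$dev/queue/nr_requests
        echo 64   > /sys/block/$dev/device/queue_depth
    done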

 

Monitoring the performance as the test is running, I can confirm that 1 MB reads are being issued at the iSCSI layer. The system uses dual 10GbE NICs, with dual subnets into the two 10GbE ports on the Nimble storage.
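The 1 MB reads are visible in iostat, where avgrq-sz is reported in 512-byte sectors, so a value around 2048 corresponds to 1 MiB requests:

    iostat -xm 1
    # avgrq-sz ~ 2048 sectors (1 MiB) on the sd paths during the read test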

 

I understand that Nimble's CASL architecture identifies large sequential reads and serves them directly from the disks. However, with an 18+3 CASL layout (21 disks total), I would expect higher sequential read performance than 9-10 MB/sec per physical disk.

 

The Nimble volume is using a 4 KiB Nimble block size, matching the 4 KiB XFS block size, with proper alignment.
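For completeness, the file system was created roughly like this (the multipath alias and mount point are examples):

    # 4 KiB XFS block size to match the 4 KiB Nimble volume block size
    mkfs.xfs -f -b size=4096 /dev/mapper/nimble-data
    mount /dev/mapper/nimble-data /mnt/xfs
    xfs_info /mnt/xfs    # confirms bsize=4096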

 

What sequential read performance should be expected for large files on a CS/1000?

 

Thanks for your help.

 

Dave B.
