
Linux iscsi.conf nr_sessions.  Why 4 or 2?

Question asked by David Baril on Dec 13, 2016
Latest reply on Dec 22, 2016 by Shiva Krishna Merla

Hello all,

 

Background:

Nimble has several best-practice-oriented documents that provide instructions for overriding the iSCSI configuration defaults under Linux. Most of these documents are somewhat stale, as they do not include information about RHEL/CentOS version 7.x, which is based on the Linux 3.10 kernel. The documents I am referring to are "BEST PRACTICES GUIDE, Nimble Storage for Red Hat Enterprise Linux 6 & Oracle Linux 6", "Deployment Considerations Guide, Nimble Storage Deployment Considerations for Linux on iSCSI", "NFS Gateway Deployment Considerations", "TECHNICAL WHITE PAPER, Nimble Storage for Splunk on Oracle Linux & RHEL 6", and "TECHNICAL REPORT, Nimble Storage Setup Guide for Single Instance Oracle 11gR2 on Oracle Linux 6.4". There are likely other documents that include Linux iSCSI configuration guidance for Nimble, but these are the ones I have found so far.

 

Note: A search of the documentation section of Nimble InfoSight for "nr_sessions" returns NO hits. I had to use Google searches to find most of these documents.

 

These documents provide some guidance on configuring the Linux "iscsi.conf" file (now named iscsid.conf under RHEL/CentOS 7.x) and discuss overriding several of the configuration-file defaults. Some of the suggested changes include a description of the rationale behind the change; other settings have no rationale discussed at all.

 

The suggested settings also vary by document, without any rationale for the differences. The lack of rationale and the inconsistent recommendations across the Nimble papers lead to confusion as to which set of parameters is "better". I also suggest that some of the recommended settings are sub-optimal and can lead to under-exploiting the high performance of Nimble storage.

 

For this posting, I would like to focus on the iscsi.conf configuration variable "session.nr_sessions", which controls the number of iSCSI sessions created per host-initiator:Nimble-target-port pair. All iSCSI sessions created for the same host-initiator:Nimble-target-port pair share the same failure risks. If you want higher availability, you would try to configure "dual fabrics" for iSCSI, which ideally involves dual NICs on the host, resulting in dual host iSCSI initiators (each called an iscsi "interface"), cabled to dual external network switches, connected to two different Nimble ports, and overall using dual subnets to improve network isolation. I will admit that this is the "ideal", and many Nimble customers do not have such a topology.
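
For reference, here is a minimal sketch of binding two host iSCSI interfaces for such a dual-fabric layout using standard open-iscsi commands; the iface and NIC names (iface0/iface1, eth0/eth1) are hypothetical examples, not anything from the Nimble documents:

    # Create two iscsi interfaces and bind each one to its own NIC
    iscsiadm -m iface -I iface0 --op=new
    iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
    iscsiadm -m iface -I iface1 --op=new
    iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1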

 

So we have a configuration with two separate, parallel paths from the Linux host to the Nimble storage. Using the iscsi.conf default value of "1" for the "session.nr_sessions" parameter, we get one iSCSI session per host-initiator:Nimble-target-port pair (as expected). With Linux dm-multipath properly configured, you have a total of 2 active iSCSI paths to the Nimble volume and a robust high-availability configuration. If you properly configure the remainder of the IO stack, you can drive very high levels of IOPS and/or large IO bandwidth across the dual paths, and scale performance beyond the level of a single 10GbE connection ... if the storage side and the host side can drive those levels of performance before bottlenecking.
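
For clarity, the knob under discussion lives in /etc/iscsi/iscsid.conf on RHEL/CentOS 7.x (iscsi.conf in the older documents), and the open-iscsi default is 1; the override shown commented out below is what the Nimble documents variously suggest, not my recommendation:

    # /etc/iscsi/iscsid.conf
    # open-iscsi default: one session per initiator-interface:target-port pair
    session.nr_sessions = 1
    # Nimble documents variously recommend overriding this to 2 or 4:
    # session.nr_sessions = 4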

 

Why then, does Nimble recommend 2 or 4 sessions PER host-initiator:Nimble-target-port pair? With the iscsi "session.nr_sessions" parameter set to 4 in this example, there will be 4 sessions per host-initiator:Nimble-target-port pair, or 8 iSCSI sessions total, and Linux dm-multipath will assemble what looks like an 8-path multipath device that in reality shares only two physical paths.
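
As a sanity check of that arithmetic, the resulting session and path counts can be inspected after login with standard tools; the expected counts below assume the dual-path example above:

    iscsiadm -m session | wc -l    # expect 8: 2 physical paths x nr_sessions=4
    multipath -ll                  # the volume appears with 8 paths over those 2 links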

 

Why multiplex 4 sessions over a single host-initiator:Nimble-target-port pair? This is more complex, and under load it can generate unneeded congestion between the 4 sessions sharing the physical path. The recommendation seems to imply that there is some resource constraint or bottleneck that prevents full utilization of the physical path between the host-initiator:Nimble-target-port pair, and that this is remediated by using multiple sessions per physical path ... from the same host.

 

If there is a legitimate per-iSCSI-session resource constraint (after the other settings are properly configured), it would be useful to be aware of it. Perhaps there are methods available with newer hardware, NICs, and software IO-stack tuning that can help address these implied restrictions.

 

For example, Nimble suggests using the vmxnet3 para-virtualized driver under VMware for a Linux VM. This driver implements and enables multi-queue receive, but does not disable irqbalance or set per-queue IRQ affinities. The vmxnet3 driver also implements multi-queue transmit, but does NOT enable the feature or set IRQ affinities. Not surprisingly, while the "default" VMware Linux vmxnet3 driver is very good, properly enabling the multi-queue capabilities, assigning IRQ affinities for the queues, and stopping irqbalance from randomizing them further improves networking performance, and thereby iSCSI performance, and Nimble performance ... to a single host.
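
For what it is worth, the manual tuning I am referring to looks roughly like the sketch below; the NIC name and IRQ number are hypothetical, and whether a given vmxnet3/kernel combination exposes per-queue IRQs this way varies by version:

    systemctl stop irqbalance            # keep irqbalance from re-spreading affinities
    grep eth0 /proc/interrupts           # find the per-queue IRQ numbers (eth0 is an example)
    echo 2 > /proc/irq/57/smp_affinity   # pin one queue's IRQ to CPU1 (IRQ 57 is an example)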

 

So ... what is the rationale for recommending 4 iSCSI sessions per host-initiator:Nimble-target-port pair, especially when using a dual-NIC, dual-fabric topology?

 

Thank you for your help.

 

Dave B
