Ever had the challenge that you configured the MPIO settings in Linux as per the best practices and multipathing still does not work across both network cards?

Meaning one path is active and the other is not, despite all the other efforts such as:

  • creating the ifaceX files
  • changes to sysctl.conf and iscsid.conf
  • checking that both Ethernet cards are up and running (ifconfig)
  • an arping from both IP addresses to the iSCSI target portal

 
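The checks in the list above can be run from the command line, for example as follows (the interface names eth1/eth2 and the portal address 10.206.10.250 are taken from this article's example; substitute your own):

```shell
# Verify both Ethernet cards are up
# (ip link is the modern replacement for ifconfig)
ip link show eth1
ip link show eth2

# Send a few ARP probes to the iSCSI target portal from each interface;
# both should get replies
arping -I eth1 -c 3 10.206.10.250
arping -I eth2 -c 3 10.206.10.250
```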

Then most likely you have been struck by ARP flux in Linux.

The following describes how to address ARP flux

(this example uses two network cards, eth1 and eth2, with ifaces iFace1 and iFace2 created).

 

# multipath -ll /dev/mapper/Oradata1
Oradata1 (2a0eac102069286256c9ce900baec26db) dm-10 Nimble,Server
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  `- 11:0:0:0 sdg 8:96  active ready running

 

Output showing only one path like this basically means that you are missing two settings in the /etc/sysctl.conf file:

 

     net.ipv4.conf.eth1.rp_filter=0

     net.ipv4.conf.eth2.rp_filter=0


(Note: you configure the Ethernet cards here, not the ifaces you created for iSCSI!)
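If you want the settings to take effect without waiting for a reboot, they can also be applied at runtime (again assuming eth1 and eth2; run as root):

```shell
# Apply the anti-ARP-flux settings immediately
sysctl -w net.ipv4.conf.eth1.rp_filter=0
sysctl -w net.ipv4.conf.eth2.rp_filter=0

# Read the values back to verify they were accepted
sysctl net.ipv4.conf.eth1.rp_filter
sysctl net.ipv4.conf.eth2.rp_filter
```

Runtime changes made with sysctl -w are lost on reboot, which is why the entries in /etc/sysctl.conf are still required.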

 

Reboot the server and run the multipath command again; in this example:

# multipath -ll /dev/mapper/Oradata1
Oradata1 (2a0eac102069286256c9ce900baec26db) dm-9 Nimble,Server
size=5.0G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 28:0:0:0 sdh 8:112 active ready running
  `- 29:0:0:0 sdm 8:192 active ready running

 

There might still be a slight chance that it shows only one connection. If so, rediscover the target through both ifaces, then log out of and back in to the nodes:

 

iscsiadm -m discovery -t sendtargets -p 10.206.10.250:3260 -I iFace1
iscsiadm -m discovery -t sendtargets -p 10.206.10.250:3260 -I iFace2
iscsiadm -m node -u
iscsiadm -m node -l
multipath -ll
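Be aware that iscsiadm -m node -u logs out of all iSCSI sessions on the host and -l logs back in to all of them, so plan this for a maintenance window if other volumes are in use. Afterwards, the active sessions can be confirmed with:

```shell
# One session should be listed per iface (iFace1 and iFace2 in this example)
iscsiadm -m session
```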

 

And your volumes should now be connected over both paths.