* [dm-devel] multipath with SAS and FC.
@ 2021-03-01 13:44 bchatelain
  2021-03-01 16:39 ` Roger Heflin
  2021-03-02 10:15 ` Xose Vazquez Perez
  0 siblings, 2 replies; 5+ messages in thread
From: bchatelain @ 2021-03-01 13:44 UTC (permalink / raw)
  To: dm-devel

Hello,

I am trying to use multipath with SAS disks, transported over Fibre
Channel, on a Dell Compellent array.
My volume is detected on two R440 PowerEdge hosts running CentOS 8 and
orchestrated by oVirt.


Problem:

On my two oVirt nodes, which have the same configuration and hardware
specifications, I see the same behaviour: one of my two links keeps
flapping between ACTIVE and FAILED.

Something like this:
# multipath -ll
36000d31003d5c2000000000000000010 dm-3 COMPELNT,Compellent Vol
size=1.5T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=25 status=active
   |- 1:0:0:2 sdb 8:16 active ready running
   `- 1:0:1:2 sdc 8:32 failed ready running   -- looping failed/ready


Some interesting details:

# multipathd show config : Full @ https://pastebin.fr/85965

blacklist {
···
     device {
         vendor "COMPELNT"
         product "Compellent Vol"
         path_grouping_policy "multibus"
         no_path_retry "queue"
     }
···
}

 Logs: full @ https://pastebin.fr/85968
Feb 25 11:48:24 isildur-adm kernel: device-mapper: multipath: 253:3: Reinstating path 8:32.
Feb 25 11:48:24 isildur-adm kernel: sd 1:0:1:2: alua: port group f01c state S non-preferred supports toluSNA
Feb 25 11:48:24 isildur-adm kernel: device-mapper: multipath: 253:3: Failing path 8:32.
Feb 25 11:48:25 isildur-adm multipathd[659460]: sdc: mark as failed


 # lsscsi -l
[0:2:0:0]    disk    DELL     PERC H330 Adp    4.30  /dev/sda
   state=running queue_depth=256 scsi_level=6 type=0 device_blocked=0 timeout=90
[1:0:0:2]    disk    COMPELNT Compellent Vol   0704  /dev/sdb
   state=running queue_depth=254 scsi_level=6 type=0 device_blocked=0 timeout=30
[1:0:1:2]    disk    COMPELNT Compellent Vol   0704  /dev/sdc
   state=running queue_depth=254 scsi_level=6 type=0 device_blocked=0 timeout=30


 # lsmod | grep fc
bnx2fc                110592  0
cnic                   69632  1 bnx2fc
libfcoe                77824  2 qedf,bnx2fc
libfc                 147456  3 qedf,bnx2fc,libfcoe
scsi_transport_fc      69632  3 qedf,libfc,bnx2fc


 # lsmod | grep sas
mpt3sas               303104  4
raid_class             16384  1 mpt3sas
megaraid_sas          172032  2
scsi_transport_sas     45056  1 mpt3sas



Have I made a misconfiguration?
Is it possible to use SAS over FC?

Thank you.


Regards,
Benoit Chatelain.


--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel


* Re: [dm-devel] multipath with SAS and FC.
  2021-03-01 13:44 [dm-devel] multipath with SAS and FC bchatelain
@ 2021-03-01 16:39 ` Roger Heflin
  2021-03-02 10:15 ` Xose Vazquez Perez
  1 sibling, 0 replies; 5+ messages in thread
From: Roger Heflin @ 2021-03-01 16:39 UTC (permalink / raw)
  To: bchatelain; +Cc: device-mapper development

You can use active/active on FC and on SAS.

But some cheaper/simpler arrays/devices are not active/active; they
are active/passive, and that requires a scsi_dh_* module to handle the
non-active/non-responding path correctly.  I don't know if yours is
one of those.

You will need to find the vendor's how-to doc on the required config
options, as some options will not work on some devices, and in some
cases things may also need to be set in the array itself to allow
certain operations multipath needs to perform.
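
As a side note, the failed/ready loop can be spotted mechanically in
the "multipath -ll" output. This is an illustrative sketch of my own
(not a multipath tool), reusing the output quoted in the original
report; on a live host you would pipe in "multipath -ll" instead:

```shell
# Illustrative only: print paths the kernel has failed while the path
# checker still reports them "ready" -- the flapping symptom above.
# The here-doc reuses the `multipath -ll` output from the original mail.
awk '/failed ready/ { print $(NF-4) }' <<'EOF'
36000d31003d5c2000000000000000010 dm-3 COMPELNT,Compellent Vol
size=1.5T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=25 status=active
   |- 1:0:0:2 sdb 8:16 active ready running
   `- 1:0:1:2 sdc 8:32 failed ready running
EOF
```

This prints "sdc", the path that keeps cycling.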





* Re: [dm-devel] multipath with SAS and FC.
  2021-03-01 13:44 [dm-devel] multipath with SAS and FC bchatelain
  2021-03-01 16:39 ` Roger Heflin
@ 2021-03-02 10:15 ` Xose Vazquez Perez
  2021-03-02 15:41   ` bchatelain
  1 sibling, 1 reply; 5+ messages in thread
From: Xose Vazquez Perez @ 2021-03-02 10:15 UTC (permalink / raw)
  To: Benoit Chatelain, DM-DEVEL ML

On 3/1/21 2:44 PM, bchatelain@cines.fr wrote:

> I am trying to use multipath with SAS disks, transported over Fibre
> Channel, on a Dell Compellent array.
> My volume is detected on two R440 PowerEdge hosts running CentOS 8 and
> orchestrated by oVirt.
> [...]
> # multipath -ll
> 36000d31003d5c2000000000000000010 dm-3 COMPELNT,Compellent Vol
> size=1.5T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
> `-+- policy='service-time 0' prio=25 status=active
>     |- 1:0:0:2 sdb 8:16 active ready running
>     `- 1:0:1:2 sdc 8:32 failed ready running   -- looping failed/ready

The default multipath config for "COMPELNT/Compellent Vol" is already "multibus".
There is no need to add a custom config to /etc/multipath.conf.

Try:
# save old configs
mv /etc/multipath.conf /etc/_multipath.conf-$(date +%s)
cp -a /etc/multipath/wwids /etc/multipath/_wwids-$(date +%s)
# reconfig mp
mpathconf --enable --user_friendly_names n
multipath -W
systemctl enable multipathd.service
# recreate initrd, and reboot the system
dracut -f
init 6


If the default mode of the "COMPELNT/Compellent Vol" array was changed to ALUA
(check with "dmesg -T | grep -i alua"), then /etc/multipath.conf must contain:

devices {
	device {
		vendor "COMPELNT"
		product "Compellent Vol"
		path_grouping_policy "group_by_prio"
		prio "alua"
		failback "immediate"
		no_path_retry 30
	}
}

Follow the same steps as above, but add that config to /etc/multipath.conf before running "dracut -f".
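
The dmesg check can be scripted. A minimal sketch of my own (the
matched line is the ALUA message quoted from the original report; on a
live host you would feed in "dmesg -T" instead of the sample variable):

```shell
# Sketch: decide which multipath config applies by looking for the
# kernel's ALUA announcement ("sd H:C:T:L: alua: ...").
# Sample line taken from the log excerpt in the original mail.
logline='sd 1:0:1:2: alua: port group f01c state S non-preferred supports toluSNA'
case "$logline" in
  *': alua: '*) echo "ALUA announced: use the group_by_prio config" ;;
  *)            echo "no ALUA messages: keep the default multibus config" ;;
esac
```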



* Re: [dm-devel] multipath with SAS and FC.
  2021-03-02 10:15 ` Xose Vazquez Perez
@ 2021-03-02 15:41   ` bchatelain
  2021-03-03  1:24     ` Xose Vazquez Perez
  0 siblings, 1 reply; 5+ messages in thread
From: bchatelain @ 2021-03-02 15:41 UTC (permalink / raw)
  To: Xose Vazquez Perez; +Cc: dm-devel

It works well now.

I added this line to the device section in multipath.conf:
  path_grouping_policy "group_by_prio"


Now I have something like this:

  # multipath -ll
36000d31003d5c2000000000000000010 dm-3 COMPELNT,Compellent Vol
size=1.5T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:0:2 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 1:0:1:2 sdc 8:32 active ready running

And there are no more errors in /var/log/messages.


Thank You.


Regards,
Benoit Chatelain.




* Re: [dm-devel] multipath with SAS and FC.
  2021-03-02 15:41   ` bchatelain
@ 2021-03-03  1:24     ` Xose Vazquez Perez
  0 siblings, 0 replies; 5+ messages in thread
From: Xose Vazquez Perez @ 2021-03-03  1:24 UTC (permalink / raw)
  To: Benoit Chatelain, DM-DEVEL ML

On 3/2/21 4:41 PM, bchatelain@cines.fr wrote:

> It works well now.
> 
> I added this line to the device section in multipath.conf:
>    path_grouping_policy "group_by_prio"

That's not enough to run properly in active/passive mode with ALUA,
mainly because the default value of failback is "manual".

This is the minimal config needed to work properly:
devices {
	device {
		vendor "COMPELNT"
		product "Compellent Vol"
		path_grouping_policy "group_by_prio"
		prio "alua"
		failback "immediate"
		no_path_retry 30
	}
}
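
With group_by_prio, each distinct priority becomes its own path group.
To illustrate, this is a sketch of my own that pairs each path with the
priority of its group, using the healthy "multipath -ll" output posted
earlier in the thread:

```shell
# Illustration: pair each path with its group's priority, parsed from
# `multipath -ll` output (sample from earlier in this thread).
awk '
  match($0, /prio=[0-9]+/) { prio = substr($0, RSTART + 5, RLENGTH - 5) }
  / running$/              { print $(NF - 4), "in group with prio", prio }
' <<'EOF'
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:0:2 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 1:0:1:2 sdc 8:32 active ready running
EOF
```

The prio=50 group (sdb) carries the I/O, the prio=1 group (sdc) stands
by, and with failback "immediate" the map returns to the high-priority
group as soon as it recovers.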



