Linux-Block Archive on lore.kernel.org
* Block device naming
@ 2019-05-16 12:26 Alibek Amaev
  2019-05-16 12:33 ` Hannes Reinecke
  0 siblings, 1 reply; 5+ messages in thread
From: Alibek Amaev @ 2019-05-16 12:26 UTC (permalink / raw)
  To: linux-block, linux-scsi

Hi!

I want to address the following problem:
When a new storage volume is hot-attached to a system (for example, when
the FC switch configuration for the FC HBAs connected to the servers is
updated), the Linux kernel reorders the block devices and changes their
names. Because scsi-id, wwn-id and the other persistent identifiers are
symlinks to the block device names, renaming a block device changes the
path behind its identifier.
This causes the server to stop working.

For example, a server has a ZFS pool with a device attached by scsi-id:
# zpool status
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 1h39m with 0 errors on Sun Oct  8 02:03:34 2017
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool                                      ONLINE       0     0     0
      scsi-3600144f0c7a5bc61000058d3b96d001d  ONLINE       0     0     0

Before the new block device is exported from the storage to the HBA, the
scsi-id symlink points to:
/dev/disk/by-id/scsi-3600144f0c7a5bc61000058d3b96d001d -> ../../sdd

After the new block device is added via the FC switch and FC HBA, the
kernel changes the block device names:
/dev/disk/by-id/scsi-3600144f0c7a5bc61000058d3b96d001d -> ../../sdf

and ZFS cannot access the device until a reboot (partprobe or 'zpool
online -e pool scsi-3600144f0c7a5bc61000058d3b96d001d' may or may not
help).
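To make the renaming concrete: the by-id entries are nothing but symlinks,
so the id -> kernel-name mapping can be snapshotted before and after a
rescan and diffed. A minimal sketch (the helper takes the directory as an
argument; on a real system it would be /dev/disk/by-id):

```shell
# Print "persistent-id -> resolved target" for every symlink in a
# by-id style directory. A sketch; on a real system the argument
# would be /dev/disk/by-id.
list_by_id() {
    for link in "$1"/*; do
        [ -L "$link" ] || continue
        printf '%s -> %s\n' "${link##*/}" "$(readlink -f "$link")"
    done
}
```

Running it before and after the storage change and diffing the output
shows exactly which ids were repointed.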

Is there any way to fix or change this behavior of the kernel?

Wouldn't it be more reasonable to immediately assign a unique persistent
identifier to the device, and link the other identifiers to it?

I also think this is not a ZFS-specific problem; it can occur with other
filesystem modules as well. Moreover, I previously encountered a similar
problem: NetApp storage attached to servers by FC, exporting multiple
LUNs, suddenly changed the order of the LUNs, and Ext4 on the servers
switched to read-only mode because the driver detected changed magic
numbers in the partition superblocks.
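That Ext4 behaviour can be illustrated in miniature: the filesystem puts a
little-endian magic 0xEF53 at offset 0x38 of the superblock, which itself
starts 1024 bytes into the device, so a reordered LUN that answers with
different content fails the check. A sketch of that same sanity check:

```shell
# Check for the ext2/3/4 superblock magic (0xEF53, little-endian) at
# byte offset 1024 + 0x38 = 1080 of a device or image file. A sketch
# of the check the kernel performs when it reads the superblock.
has_ext_magic() {
    magic=$(dd if="$1" bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    [ "$magic" = "53ef" ]
}
```

If a LUN suddenly presents different bytes at that offset, the mount is no
longer looking at the filesystem it started with.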

With regards,
Alibek


* Re: Block device naming
  2019-05-16 12:26 Block device naming Alibek Amaev
@ 2019-05-16 12:33 ` Hannes Reinecke
  2019-05-16 13:49   ` Alibek Amaev
  0 siblings, 1 reply; 5+ messages in thread
From: Hannes Reinecke @ 2019-05-16 12:33 UTC (permalink / raw)
  To: Alibek Amaev, linux-block, linux-scsi

On 5/16/19 2:26 PM, Alibek Amaev wrote:
> Hi!
> 
> I want to address the following problem:
> When a new storage volume is hot-attached to a system (for example, when
> the FC switch configuration for the FC HBAs connected to the servers is
> updated), the Linux kernel reorders the block devices and changes their
> names. Because scsi-id, wwn-id and the other persistent identifiers are
> symlinks to the block device names, renaming a block device changes the
> path behind its identifier.
> This causes the server to stop working.
> 
> For example, a server has a ZFS pool with a device attached by scsi-id:
> # zpool status
>    pool: pool
>   state: ONLINE
>    scan: scrub repaired 0 in 1h39m with 0 errors on Sun Oct  8 02:03:34 2017
> config:
> 
>      NAME                                      STATE     READ WRITE CKSUM
>      pool                                      ONLINE       0     0     0
>        scsi-3600144f0c7a5bc61000058d3b96d001d  ONLINE       0     0     0
> 
> Before the new block device is exported from the storage to the HBA, the
> scsi-id symlink points to:
> /dev/disk/by-id/scsi-3600144f0c7a5bc61000058d3b96d001d -> ../../sdd
> 
> After the new block device is added via the FC switch and FC HBA, the
> kernel changes the block device names:
> /dev/disk/by-id/scsi-3600144f0c7a5bc61000058d3b96d001d -> ../../sdf
> 
> and ZFS cannot access the device until a reboot (partprobe or 'zpool
> online -e pool scsi-3600144f0c7a5bc61000058d3b96d001d' may or may not
> help).
> 
Hmm. That really is curious; typically existing devices will not be
reassigned, especially not if they are in use by something.
And the FC layer goes to quite some lengths to prevent this from
happening.
So this really looks more like an issue with how exactly this 'adding a
new block device' step was done.

> Is there any way to fix or change this behavior of the kernel?
> 
As I said, this typically does not happen.
It would need closer examination to figure out what really happened.

> Wouldn't it be more reasonable to immediately assign a unique persistent
> identifier to the device, and link the other identifiers to it?
> 
Which is what we try ...

> I also think this is not a ZFS-specific problem; it can occur with other
> filesystem modules as well. Moreover, I previously encountered a similar
> problem: NetApp storage attached to servers by FC, exporting multiple
> LUNs, suddenly changed the order of the LUNs, and Ext4 on the servers
> switched to read-only mode because the driver detected changed magic
> numbers in the partition superblocks.
> 
Suddenly changing the order of LUNs is _not_ what is supposed to
happen. This really sounds more like an issue with NetApp.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke            Teamlead Storage & Networking
hare@suse.de                              +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)


* Re: Block device naming
  2019-05-16 12:33 ` Hannes Reinecke
@ 2019-05-16 13:49   ` Alibek Amaev
  2019-05-16 14:07     ` Hannes Reinecke
  0 siblings, 1 reply; 5+ messages in thread
From: Alibek Amaev @ 2019-05-16 13:49 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-block, linux-scsi

I have another example from real life:
In August 2018 I started a server with storage attached by FC from a ZS3
and a ZS5 (these are Oracle ZFS Storage Appliances, not NetApp, and they
also export space as LUNs); the server used one LUN from the ZS5.
Recently all IO on this exported LUN stopped and io-wait grew: no errors
or FC-related changes in dmesg, no errors in /var/log/kern.log* or
/var/log/syslog.log*, no throttling, no EDAC errors, nothing else.
Before the reboot I saw:
wwn-0x600144f0b49c14d100005b7af8ee001c -> ../../sdc
I tried to run partprobe and tried to copy some data from the block
device to /dev/null with dd; the operations never finished, the IO was
blocked.
After the reboot I saw:
wwn-0x600144f0b49c14d100005b7af8ee001c -> ../../sdd
and the server ran fine.

I also have a LUN exported from this storage in shared mode, accessible
to all servers by FC. This LUN is no longer needed, but now I doubt it
can be removed safely...
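For the stuck-IO case, a bounded probe is safer than a bare dd (which just
hangs): wrapping the read in timeout(1) turns "blocked forever" into a
failure that a script can act on. A sketch (the 5-second default is an
arbitrary choice; on a real block device you would add iflag=direct so the
read is not satisfied from the page cache):

```shell
# Try to read 1 MiB from a device within a time limit; returns non-zero
# if the read blocks past the limit or fails. Sketch only: the timeout
# value is arbitrary, and on a real block device iflag=direct should be
# added to dd so cached data cannot mask a hung device.
probe_read() {
    timeout "${2:-5}" dd if="$1" of=/dev/null bs=1M count=1 2>/dev/null
}
```

Run periodically, this would have flagged the hang long before io-wait
became visible.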




* Re: Block device naming
  2019-05-16 13:49   ` Alibek Amaev
@ 2019-05-16 14:07     ` Hannes Reinecke
  2019-05-17 11:24       ` Alibek Amaev
  0 siblings, 1 reply; 5+ messages in thread
From: Hannes Reinecke @ 2019-05-16 14:07 UTC (permalink / raw)
  To: Alibek Amaev; +Cc: linux-block, linux-scsi

On 5/16/19 3:49 PM, Alibek Amaev wrote:
> I have another example from real life:
> In August 2018 I started a server with storage attached by FC from a ZS3
> and a ZS5 (these are Oracle ZFS Storage Appliances, not NetApp, and they
> also export space as LUNs); the server used one LUN from the ZS5.
> Recently all IO on this exported LUN stopped and io-wait grew: no errors
> or FC-related changes in dmesg, no errors in /var/log/kern.log* or
> /var/log/syslog.log*, no throttling, no EDAC errors, nothing else.
> Before the reboot I saw:
> wwn-0x600144f0b49c14d100005b7af8ee001c -> ../../sdc
> I tried to run partprobe and tried to copy some data from the block
> device to /dev/null with dd; the operations never finished, the IO was
> blocked.
> After the reboot I saw:
> wwn-0x600144f0b49c14d100005b7af8ee001c -> ../../sdd
> and the server ran fine.
> 
> I also have a LUN exported from this storage in shared mode, accessible
> to all servers by FC. This LUN is no longer needed, but now I doubt it
> can be removed safely...
> 
It's all a bit conjecture at this point.
'sdc' could show up as 'sdd' after the next reboot, with no
side-effects whatsoever.
At the same time, 'sdc' could have been blocked by a host of reasons, 
none of which are related to the additional device being exported.

It doesn't really look like an issue with device naming; you would need
to do a proper investigation on your server to figure out why the I/O
stopped.
Device renaming is typically the least likely cause here.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		   Teamlead Storage & Networking
hare@suse.de			               +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)


* Re: Block device naming
  2019-05-16 14:07     ` Hannes Reinecke
@ 2019-05-17 11:24       ` Alibek Amaev
  0 siblings, 0 replies; 5+ messages in thread
From: Alibek Amaev @ 2019-05-17 11:24 UTC (permalink / raw)
  To: Hannes Reinecke; +Cc: linux-block, linux-scsi

I understand that block device names are not guaranteed to be stable
across reboots.
But as I understand it, in these cases the HCTL (Host:Channel:Target:LUN)
order of the devices changed. Unfortunately, I did not capture the HCTL
order before the failures, so I cannot provide evidence; from memory,
though, the HCTL order before the failure was different in all the cases
presented.
This is indirectly confirmed by how zpool reports the state of the pool,
which seems to depend on how the device was added (by scsi-id or by
wwn-id).
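Capturing that evidence is cheap to automate: sysfs exposes the HCTL of
every SCSI device together with its kernel name, so a periodic snapshot
(e.g. from cron) would show whether the order really changed before a
failure. A sketch (the sysfs root is a parameter only so the function can
be exercised against a fake tree; on a real system it is /sys, and lsscsi
prints the same mapping interactively):

```shell
# Print "H:C:T:L kernel-name" for every SCSI device, reading the layout
# sysfs uses under class/scsi_device. The root directory is a parameter
# for testing; on a real system pass /sys (or nothing).
dump_hctl() {
    for dev in "${1:-/sys}"/class/scsi_device/*; do
        [ -d "$dev" ] || continue
        name=$(ls "$dev/device/block" 2>/dev/null)
        printf '%s %s\n' "${dev##*/}" "$name"
    done
}
```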
With scsi-id (when there were messages in dmesg about device changes),
the failure showed up as follows:
---
# zpool status
  pool: pool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://zfsonlinux.org/msg/ZFS-8000-HC
  scan: scrub repaired 0 in 1h39m with 0 errors on Sun Oct  8 02:03:34 2017
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool                                      UNAVAIL      0     0     0  insufficient replicas
      scsi-3600144f0c7a5bc61000058d3b96d001d  FAULTED      3     0     0  too many errors

errors: 51 data errors, use '-v' for a list
---
Whereas in the normal state, zpool status shows:
---
# zpool status
  pool: pool
 state: ONLINE
  scan: scrub repaired 0 in 1h39m with 0 errors on Sun Oct  8 02:03:34 2017
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool                                      ONLINE       0     0     0
      scsi-3600144f0c7a5bc61000058d3b96d001d  ONLINE       0     0     0

errors: No known data errors
---

And in the other case, when the LUN was imported by wwn-id (and with no
errors in dmesg), zpool status in the error state was:
---
# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 17h30m with 0 errors on Sun Apr 14 17:54:55 2019
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool1                                     ONLINE       0     0     0
      sdc                                     ONLINE       0     0     0

errors: No known data errors
---
The status shows no errors, but it lists the block device name from /dev.
Whereas in the normal state, zpool status shows the wwn-id from
/dev/disk/by-id instead of the device name from /dev:
---
root@lpr11a:~# zpool status
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 17h30m with 0 errors on Sun Apr 14 17:54:55 2019
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool1                                     ONLINE       0     0     0
      wwn-0x600144f0b49c14d100005b7af8ee001c  ONLINE       0     0     0

errors: No known data errors
---
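Incidentally, the name shown there is just whatever path the pool happened
to be imported with; ZFS on Linux can be told to resolve vdevs through the
persistent links instead, which keeps zpool status independent of the sdX
ordering. A sketch of the usual procedure (pool name taken from the output
above; this needs a maintenance window, since the pool is offline between
the export and the import):

```shell
# Re-import a pool so its vdevs are looked up via the persistent
# /dev/disk/by-id links rather than the volatile sdX names.
# Sketch only: the pool must be briefly exported, so plan downtime.
zpool export pool1
zpool import -d /dev/disk/by-id pool1
```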

P.S. I would also like to note that /dev/disk does not reflect reality:
SSDs are not disks.


