* Devices expected?
@ 2018-09-06 20:26 Gruher, Joseph R
  2018-09-06 21:05 ` Keith Busch
  0 siblings, 1 reply; 5+ messages in thread
From: Gruher, Joseph R @ 2018-09-06 20:26 UTC (permalink / raw)


Hi-

I'm running Ubuntu 16.04.5 with kernel 4.18.6.  I connected two NVMeoF volumes:

rsa@rsd23n03:~$ sudo nvme connect -t rdma -i 32 -a 10.5.0.8 -n joe-vm-1
rsa@rsd23n03:~$ sudo nvme connect -t rdma -i 32 -a 10.5.0.8 -n joe-vm-2

Looking at the disk stats afterwards, I see nvme2n1 and nvme3n1 as I would expect, but there are also these nvme2c2n1 and nvme3c3n1 devices present.  Note that nvme0n1 and nvme1n1 are actual local NVMe devices, not NVMeoF connected.

rsa@rsd23n03:~$ cat /proc/diskstats
   7       0 loop0 5 0 16 0 0 0 0 0 0 0 0
   7       1 loop1 0 0 0 0 0 0 0 0 0 0 0
   7       2 loop2 0 0 0 0 0 0 0 0 0 0 0
   7       3 loop3 0 0 0 0 0 0 0 0 0 0 0
   7       4 loop4 0 0 0 0 0 0 0 0 0 0 0
   7       5 loop5 0 0 0 0 0 0 0 0 0 0 0
   7       6 loop6 0 0 0 0 0 0 0 0 0 0 0
   7       7 loop7 0 0 0 0 0 0 0 0 0 0 0
 259       0 nvme0n1 15222 0 696826 0 861 1596 28938 0 0 213856 218132
 259       1 nvme0n1p1 1180 0 14056 0 2 0 2 0 0 24 964
 259       2 nvme0n1p2 13828 0 669770 0 855 1596 28936 0 0 1612 4800
 259       3 nvme0n1p3 112 0 8056 0 0 0 0 0 0 8 8
 259       4 nvme1n1 135 0 6504 0 0 0 0 0 0 8 8
  11       0 sr0 0 0 0 0 0 0 0 0 0 0 0
   0       0 nvme2c2n1 49 0 2208 0 0 0 0 0 0 8 8
 259       6 nvme2n1 0 0 0 0 0 0 0 0 0 0 0
   0       0 nvme3c3n1 49 0 2208 0 0 0 0 0 0 4 4
 259       8 nvme3n1 0 0 0 0 0 0 0 0 0 0 0

I read 1GB from each NVMeoF volume with dd, and the activity seems to get tracked against the 'c' devices instead of nvme2n1 and nvme3n1:

rsa@rsd23n03:~$ cat /proc/diskstats
   7       0 loop0 5 0 16 0 0 0 0 0 0 0 0
   7       1 loop1 0 0 0 0 0 0 0 0 0 0 0
   7       2 loop2 0 0 0 0 0 0 0 0 0 0 0
   7       3 loop3 0 0 0 0 0 0 0 0 0 0 0
   7       4 loop4 0 0 0 0 0 0 0 0 0 0 0
   7       5 loop5 0 0 0 0 0 0 0 0 0 0 0
   7       6 loop6 0 0 0 0 0 0 0 0 0 0 0
   7       7 loop7 0 0 0 0 0 0 0 0 0 0 0
 259       0 nvme0n1 15226 0 696986 0 889 1618 29338 0 0 312432 316708
 259       1 nvme0n1p1 1180 0 14056 0 2 0 2 0 0 24 964
 259       2 nvme0n1p2 13832 0 669930 0 883 1618 29336 0 0 1620 4808
 259       3 nvme0n1p3 112 0 8056 0 0 0 0 0 0 8 8
 259       4 nvme1n1 135 0 6504 0 0 0 0 0 0 8 8
  11       0 sr0 0 0 0 0 0 0 0 0 0 0 0
   0       0 nvme2c2n1 8241 0 2099616 0 0 0 0 0 0 2620 4032
 259       6 nvme2n1 0 0 0 0 0 0 0 0 0 0 0
   0       0 nvme3c3n1 8241 0 2099616 0 0 0 0 0 0 1316 1856
 259       8 nvme3n1 0 0 0 0 0 0 0 0 0 0 0
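For scripting around the stats in the meantime, the head device and its per-path 'c' device can simply be summed.  A minimal sketch, assuming the standard /proc/diskstats layout (field 3 is the device name, field 6 is sectors read) and the nvme2 names from the output above:

```shell
# Sum sectors read across nvme2n1 and any of its per-path devices
# (nvme2c<N>n1); with multipath the I/O is accounted to the 'c' device.
awk '$3 ~ /^nvme2(c[0-9]+)?n1$/ { total += $6 } END { print total+0 }' /proc/diskstats
```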

Also, when I run lsblk, I get an error related to these devices:

rsa@rsd23n03:~$ lsblk
lsblk: nvme2c2n1: unknown device name
lsblk: nvme3c3n1: unknown device name
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1     259:0    0 119.2G  0 disk
├─nvme0n1p3 259:3    0   977M  0 part [SWAP]
├─nvme0n1p1 259:1    0   512M  0 part /boot/efi
└─nvme0n1p2 259:2    0 117.8G  0 part /
nvme3n1     259:8    0 931.5G  0 disk
nvme2n1     259:6    0 931.5G  0 disk
sr0          11:0    1  1024M  0 rom
nvme1n1     259:4    0 931.5G  0 disk

They do not appear to exist in /dev/:

rsa@rsd23n03:~$ ls /dev/nvm*
/dev/nvme0    /dev/nvme0n1p1  /dev/nvme0n1p3  /dev/nvme1n1  /dev/nvme2n1  /dev/nvme3n1
/dev/nvme0n1  /dev/nvme0n1p2  /dev/nvme1      /dev/nvme2    /dev/nvme3    /dev/nvme-fabrics

The dmesg entries around the connect seem OK (I also did a discovery here before the connect):

[  179.240487] nvme nvme2: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery", addr 10.5.0.8:4420
[  179.241120] nvme nvme2: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
[  200.304085] nvme nvme2: creating 32 I/O queues.
[  200.943936] nvme nvme2: new ctrl: NQN "joe-vm-1", addr 10.5.0.8:4420
[  200.949023]  nvme2n1:
[  207.190511] nvme nvme3: creating 32 I/O queues.
[  207.819411] nvme nvme3: new ctrl: NQN "joe-vm-2", addr 10.5.0.8:4420
[  207.824087]  nvme3n1:

The drives in the target are Intel P4500.  Target is also running Ubuntu 16.04 with 4.18.6 and configuration looks like this:

rsa@storage-1:~$ sudo nvmetcli ls
o- / ........................................................................................................ [...]
  o- hosts .................................................................................................. [...]
  o- ports .................................................................................................. [...]
  | o- 1 ............................................................. [trtype=rdma, traddr=10.5.0.8, trsvcid=4420]
  |   o- referrals .......................................................................................... [...]
  |   o- subsystems ......................................................................................... [...]
  |     o- joe-vm-1 ......................................................................................... [...]
  |     o- joe-vm-2 ......................................................................................... [...]
  |     o- nqn01 ............................................................................................ [...]
  |     o- nqn02 ............................................................................................ [...]
  |     o- nqn03 ............................................................................................ [...]
  |     o- nqn04 ............................................................................................ [...]
  |     o- nqn05 ............................................................................................ [...]
  |     o- nqn06 ............................................................................................ [...]
  |     o- nqn07 ............................................................................................ [...]
  |     o- nqn08 ............................................................................................ [...]
  |     o- nqn09 ............................................................................................ [...]
  |     o- nqn10 ............................................................................................ [...]
  |     o- nqn11 ............................................................................................ [...]
  |     o- nqn12 ............................................................................................ [...]
  |     o- nqn13 ............................................................................................ [...]
  o- subsystems ............................................................................................. [...]
    o- joe-vm-1 ............................................... [version=1.3, allow_any=1, serial=b6564d74b81e2ca0]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 ............................. [path=/dev/nvme14n1, uuid=9f7bca77-33d1-49bc-b36f-2e8997474ffb, enabled]
    o- joe-vm-2 ............................................... [version=1.3, allow_any=1, serial=b729202cea8db493]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 ............................. [path=/dev/nvme15n1, uuid=7b5d87da-52c8-47d6-91e8-5f246bae142a, enabled]
    o- nqn01 .................................................. [version=1.3, allow_any=1, serial=85b4c9db8d69b115]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme1n1, uuid=c311956d-c198-4291-aa1a-5544b8c529af, enabled]
    o- nqn02 .................................................. [version=1.3, allow_any=1, serial=769b3c0093f050fa]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme2n1, uuid=e30fa42f-52c7-4f56-b956-458f6e48a815, enabled]
    o- nqn03 .................................................. [version=1.3, allow_any=1, serial=7c5ff700733288ac]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme3n1, uuid=2361f047-9c8b-4c8f-ba8d-7a61327dd9be, enabled]
    o- nqn04 .................................................. [version=1.3, allow_any=1, serial=bbcdf5c1976d7b89]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme4n1, uuid=a103ea20-c678-4171-a256-6e8ab5f07aea, enabled]
    o- nqn05 .................................................. [version=1.3, allow_any=1, serial=b5d6519ea10ee571]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme5n1, uuid=062ebdea-f68d-4f8b-8d33-483b13fdc120, enabled]
    o- nqn06 .................................................. [version=1.3, allow_any=1, serial=36af7cd910e698ae]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme6n1, uuid=317b6d55-af11-460f-9730-e06d734aca97, enabled]
    o- nqn07 .................................................. [version=1.3, allow_any=1, serial=9038280e2442e469]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme7n1, uuid=bec91592-2ebb-4310-804e-0eb336bc166a, enabled]
    o- nqn08 .................................................. [version=1.3, allow_any=1, serial=8fcb1df6c2c22328]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme8n1, uuid=96e2d208-3239-4878-8bde-7fb9821684cc, enabled]
    o- nqn09 .................................................. [version=1.3, allow_any=1, serial=ca2739e704a71b40]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 .............................. [path=/dev/nvme9n1, uuid=4a203979-2dc2-465b-8ce9-8e6a277ff1de, enabled]
    o- nqn10 ................................................... [version=1.3, allow_any=1, serial=70093977d1ebc0e]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 ............................. [path=/dev/nvme10n1, uuid=14c6f66d-f3c1-4fe1-a639-f7fde5568e34, enabled]
    o- nqn11 .................................................. [version=1.3, allow_any=1, serial=ee7551c6e84e45cb]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 ............................. [path=/dev/nvme11n1, uuid=a5f24c39-f3b2-4190-a7e5-f53cdf783261, enabled]
    o- nqn12 .................................................. [version=1.3, allow_any=1, serial=67a051ebffd3e8fe]
    | o- allowed_hosts ...................................................................................... [...]
    | o- namespaces ......................................................................................... [...]
    |   o- 1 ............................. [path=/dev/nvme12n1, uuid=3d6075fa-6e4e-46d3-918c-f9a1e3221aa1, enabled]
    o- nqn13 .................................................. [version=1.3, allow_any=1, serial=79327e3ce01b319a]
      o- allowed_hosts ...................................................................................... [...]
      o- namespaces ......................................................................................... [...]
        o- 1 ............................. [path=/dev/nvme13n1, uuid=24059ddc-e748-451e-97f7-c91482937069, enabled]

In general, my question is whether this is expected behavior.  If it's a bug, is it fixed in a newer kernel?  We are trying to track some disk stats, and the unexpected (at least from our perspective) behavior in /proc/diskstats is causing us some confusion.  Thanks!

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Devices expected?
  2018-09-06 20:26 Devices expected? Gruher, Joseph R
@ 2018-09-06 21:05 ` Keith Busch
  2018-09-07 15:40   ` Gruher, Joseph R
  0 siblings, 1 reply; 5+ messages in thread
From: Keith Busch @ 2018-09-06 21:05 UTC (permalink / raw)


On Thu, Sep 06, 2018 at 08:26:55PM +0000, Gruher, Joseph R wrote:
> In general, my question is whether this is expected behavior?  If a
> bug, is it fixed in a newer kernel?  We are trying to track some disk
> stats and the unexpected (at least from our perspective) behavior in
> /proc/diskstats is causing us some confusion.  Thanks!

With the exception of the 'lsblk' error message, I think all your results
are expected when CONFIG_NVME_MULTIPATH=y.


* Devices expected?
  2018-09-06 21:05 ` Keith Busch
@ 2018-09-07 15:40   ` Gruher, Joseph R
  2018-09-07 15:55     ` Keith Busch
  0 siblings, 1 reply; 5+ messages in thread
From: Gruher, Joseph R @ 2018-09-07 15:40 UTC (permalink / raw)


> > In general, my question is whether this is expected behavior?  If a
> > bug, is it fixed in a newer kernel?  We are trying to track some disk
> > stats and the unexpected (at least from our perspective) behavior in
> > /proc/diskstats is causing us some confusion.  Thanks!
> 
> With the exception of the 'lsblk' error message, I think all your results are
> expected when CONFIG_NVME_MULTIPATH=y.

Thanks Keith.  I just pulled the Ubuntu mainline kernel package, rather than building my own, so I guess they have NVMeoF multipathing on by default.

Does nvme2c1n1 represent one path for the nvme2n1 device to use?  If I connected a second path I would see something like nvme2c2n1 pop up as well?


* Devices expected?
  2018-09-07 15:40   ` Gruher, Joseph R
@ 2018-09-07 15:55     ` Keith Busch
  2018-09-11  7:23       ` Christoph Hellwig
  0 siblings, 1 reply; 5+ messages in thread
From: Keith Busch @ 2018-09-07 15:55 UTC (permalink / raw)


On Fri, Sep 07, 2018 at 03:40:45PM +0000, Gruher, Joseph R wrote:
> > > In general, my question is whether this is expected behavior?  If a
> > > bug, is it fixed in a newer kernel?  We are trying to track some disk
> > > stats and the unexpected (at least from our perspective) behavior in
> > > /proc/diskstats is causing us some confusion.  Thanks!
> > 
> > With the exception of the 'lsblk' error message, I think all your results are
> > expected when CONFIG_NVME_MULTIPATH=y.
> 
> Thanks Keith.  I just pulled the Ubuntu mainline kernel package, rather than building my own, so I guess they have NVMeoF multipathing on by default.

If you do not want to use this feature, you can disable it with the kernel
parameter "nvme_core.multipath=0" if modifying the kernel or its config
is not desirable.
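A sketch of checking and disabling it, for reference (the sysfs path is the standard module-parameter location; the modprobe.d filename is an arbitrary choice, and update-initramfs is Ubuntu-specific):

```shell
# Report whether native NVMe multipath is currently enabled (Y or N).
# Falls back to a message if the nvme_core module is not loaded.
cat /sys/module/nvme_core/parameters/multipath 2>/dev/null \
    || echo "nvme_core not loaded"

# To disable it persistently without rebuilding the kernel, either add
# nvme_core.multipath=0 to the kernel command line, or create
# /etc/modprobe.d/nvme-multipath.conf containing:
#
#     options nvme_core multipath=0
#
# and then regenerate the initramfs:  sudo update-initramfs -u
```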

> Does nvme2c1n1 represent one path for the nvme2n1 device to use?  If I connected a second path I would see something like nvme2c2n1 pop up as well?

Yes, that is what those names represent.

In your example, the '2' in nvme2 is the subsystem instance, which
is just a unique number the software assigned to the subsystem. The '1'
in nvme2c1 is the controller id that the controller reports in its
Identify Controller data, so it may not necessarily start at 1 or
increase sequentially.
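Those pieces can be pulled apart mechanically; a quick sketch for the nvme&lt;subsys&gt;c&lt;ctrl&gt;n&lt;nsid&gt; form described above (the sed pattern and example name are just an illustration):

```shell
# Decompose a per-path NVMe device name like nvme2c2n1 into its parts:
# subsystem instance, controller id, and namespace id.
echo nvme2c2n1 | sed -n 's/^nvme\([0-9]*\)c\([0-9]*\)n\([0-9]*\)$/subsystem=\1 controller=\2 namespace=\3/p'
```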


* Devices expected?
  2018-09-07 15:55     ` Keith Busch
@ 2018-09-11  7:23       ` Christoph Hellwig
  0 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2018-09-11  7:23 UTC (permalink / raw)


On Fri, Sep 07, 2018 at 09:55:25AM -0600, Keith Busch wrote:
> > Does nvme2c1n1 represent one path for the nvme2n1 device to use?  If I connected a second path I would see something like nvme2c2n1 pop up as well?
> 
> Yes, that is what those names represent.
> 
> In your example, the '2' in nvme2 is the subsystem instance, which
> is just a unique number the software assigned to the subsystem. The '1'
> in nvme2c1 is the controller id that the controller reports in its
> Identify Controller data, so it may not necessarily start at 1 or
> increase sequentially.

Also note that we will only use the multipath code if the namespace
is marked as a shared namespace in NMIC.  For private namespaces we
skip the multipath detection.
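From the host side, the NMIC field can be inspected with nvme-cli; a sketch, assuming nvme-cli is installed and using the device name from the example earlier in the thread:

```shell
# Dump the Identify Namespace data and show the NMIC field; bit 0 set
# (e.g. "nmic : 0x1") means the namespace may be shared by multiple
# controllers, so the multipath code applies to it.
sudo nvme id-ns /dev/nvme2n1 | grep -i nmic
```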

