* [PATCH 0/3] nvme: fix internal passthru error messages
@ 2022-04-11  3:12 Chaitanya Kulkarni
  2022-04-11  3:12 ` [PATCH 1/3] nvmet: handle admin default command set identifier Chaitanya Kulkarni
                   ` (2 more replies)
  0 siblings, 3 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11  3:12 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, sagi, kbusch, Chaitanya Kulkarni

Hi,

Since the addition of nvme_log_error(), we have received reports of
error messages being printed for internal passthru commands. This also
uncovered that the target execute identify code path does not handle
the NVME_CSI_NVM case, and that the host issues the non-mdts-limits
Identify even for the discovery controller.

The first patch fixes the target execute identify case, the second
patch prevents configuring non-mdts limits for the discovery
controller, and the third patch masks the internal passthru error
warnings, tested with nvme error injection.

Regarding the third patch, Alan has reported that it also masks
nvme-admin-passthru command errors when used with the error injection
framework. I wasn't able to reproduce that; maybe we can wait for Alan
to confirm.
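
For reference, the gating added by the third patch boils down to the
following completion-path check (a sketch, not the patch itself; the
authoritative change is the diff in patch 3 below). User-space passthru
commands go through nvme_submit_user_cmd() rather than
__nvme_submit_sync_cmd(), so they should not pick up RQF_QUIET and
their errors should still be logged:

	/* Sketch of the completion-path check from patch 3: log errors
	 * only for requests not marked quiet at submission time. */
	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
		nvme_log_error(req);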

Following is the test execution log :-

1. Tests with CONFIG_BLK_DEV_ZONED enabled.
1.1 nvme-admin-passthru error injection logging error messages.
1.2 blktests results.
1.3 Test fio verify job with NVMeOF ZBD backend.
2. Tests with CONFIG_BLK_DEV_ZONED disabled :-
2.1 nvme-admin-passthru error injection logging error messages.
2.2 blktests results.
2.3 Try and create NVMeOF controller with ZBD backend & fail gracefully.

-ck

Chaitanya Kulkarni (3):
  nvmet: handle admin default command set identifier
  nvme-core: don't check non-mdts for disc ctrl
  nvme-core: mark internal passthru req REQ_QUIET

 drivers/nvme/host/core.c        |  7 ++++++-
 drivers/nvme/target/admin-cmd.c | 22 ++++++++++++++--------
 drivers/nvme/target/nvmet.h     |  2 +-
 drivers/nvme/target/zns.c       |  2 +-
 4 files changed, 22 insertions(+), 11 deletions(-)

1. Tests with CONFIG_BLK_DEV_ZONED enabled :-
-----------------------------------------------------------------------

nvme (nvme-5.18) # git log -3
commit 0d77072e26fa074da0c36e423a5802fd18ef976d (HEAD -> nvme-5.18)
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Fri Apr 8 16:45:18 2022 -0700

    nvme-core: mark internal passthru req REQ_QUIET
    
    Mark internal passthru requests quiet in the submission path with
    the RQF_QUIET flag added in __nvme_submit_sync_cmd(). In the
    completion path, if the nvme request resulted in an error and the
    request is marked RQF_QUIET, then don't log the error with
    nvme_log_error().
    
    Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>

commit d4df054957f3cbd37601d65d8e7460aaaee8733d
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Fri Apr 8 15:50:10 2022 -0700

    nvme-core: don't check non-mdts for disc ctrl
    
    Don't check the non-mdts limits for the discovery controller as it
    doesn't support the I/O command set. This also masks the following
    warning reported by nvme_log_error() when running blktests
    nvme/002: -
    
    [ 2005.155946] run blktests nvme/002 at 2022-04-09 16:57:47
    [ 2005.192223] loop: module loaded
    [ 2005.196429] nvmet: adding nsid 1 to subsystem blktests-subsystem-0
    [ 2005.200334] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
    
    <------------------------------SNIP---------------------------------->
    
    [ 2008.958108] nvmet: adding nsid 1 to subsystem blktests-subsystem-997
    [ 2008.962082] nvmet: adding nsid 1 to subsystem blktests-subsystem-998
    [ 2008.966102] nvmet: adding nsid 1 to subsystem blktests-subsystem-999
    [ 2008.973132] nvmet: creating discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN testhostnqn.
    *[ 2008.973196] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
    [ 2008.974595] nvme nvme1: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
    [ 2009.103248] nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
    
    Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>

commit 18aac654ff824a4b169e51d72b674245a7a1b79c
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Thu Apr 7 12:11:37 2022 -0700

    nvmet: handle admin default command set identifier
    
    When configuring non-mdts limits, the NVMeOF in-kernel host issues
    an Identify command with the cns value set to NVME_ID_CNS_CS_CTRL
    and the csi value set to NVME_CSI_NVM. In the target code we only
    handle the NVME_ID_CNS_CS_CTRL case with NVME_CSI_ZNS when
    CONFIG_BLK_DEV_ZONED is set.

    Handle the missing case for both CONFIG_BLK_DEV_ZONED and
    !CONFIG_BLK_DEV_ZONED so the target can handle NVME_ID_CNS_CS_CTRL
    with NVME_CSI_NVM.

    Use this opportunity to add a default for the NVME_ID_CNS_CTRL case,
    which makes the code uniform for all the cases in
    nvmet_execute_identify().

    Also, rename nvmet_execute_identify_ctrl() to
    nvmet_execute_identify_cns_cs_ctrl_nvm() and
    nvmet_execute_identify_cns_cs_ctrl() to
    nvmet_execute_identify_cns_cs_ctrl_zns().
    
    This also masks the following warning reported by nvme_log_error()
    when running blktests nvme/012:-
    
    [ 2131.702140] run blktests nvme/012 at 2022-04-09 16:59:53
    [ 2131.722144] loop0: detected capacity change from 0 to 2097152
    [ 2131.730586] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
    [ 2131.738826] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN testhostnqn.
    *[ 2131.738911] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
    [ 2131.740540] nvme nvme1: creating 48 I/O queues.
    [ 2131.743925] nvme nvme1: new ctrl: "blktests-subsystem-1"
    [ 2132.790058] XFS (nvme1n1): Mounting V5 Filesystem
    [ 2132.793667] XFS (nvme1n1): Ending clean mount
    [ 2132.794030] xfs filesystem being mounted at /mnt/blktests supports timestamps until 2038 (0x7fffffff)
    [ 2142.471812] XFS (nvme1n1): Unmounting Filesystem
    [ 2142.492566] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
    
    Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) # grep -i zoned .config
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_DEBUG_FS_ZONED=y
# CONFIG_DM_ZONED is not set
nvme (nvme-5.18) # ./compile_nvme.sh 
+ umount /mnt/nvme0n1
+ clear_dmesg
./compile_nvme.sh: line 3: clear_dmesg: command not found
umount: /mnt/nvme0n1: no mount point specified.
+ ./delete.sh
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 0 controller(s)

real	0m0.010s
user	0m0.001s
sys	0m0.004s
+ rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
+ rmdir /sys/kernel/config/nvmet/ports/1
rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file or directory
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
./delete.sh: line 14: /sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such file or directory
+ rmdir 'config/nullb/nullb*'
rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
+ sleep 1
+ git diff
+ sleep 1
++ nproc
+ make -j 48 M=drivers/nvme/target/ clean
++ nproc
+ make -j 48 M=drivers/nvme/ modules
  CC [M]  drivers/nvme/target/core.o
  CC [M]  drivers/nvme/target/configfs.o
  CC [M]  drivers/nvme/target/admin-cmd.o
  CC [M]  drivers/nvme/target/fabrics-cmd.o
  CC [M]  drivers/nvme/target/discovery.o
  CC [M]  drivers/nvme/target/io-cmd-file.o
  CC [M]  drivers/nvme/target/io-cmd-bdev.o
  CC [M]  drivers/nvme/target/passthru.o
  CC [M]  drivers/nvme/target/zns.o
  CC [M]  drivers/nvme/target/trace.o
  CC [M]  drivers/nvme/target/loop.o
  CC [M]  drivers/nvme/target/rdma.o
  CC [M]  drivers/nvme/target/fc.o
  CC [M]  drivers/nvme/target/fcloop.o
  CC [M]  drivers/nvme/target/tcp.o
  LD [M]  drivers/nvme/target/nvme-loop.o
  LD [M]  drivers/nvme/target/nvme-fcloop.o
  LD [M]  drivers/nvme/target/nvmet.o
  LD [M]  drivers/nvme/target/nvmet-fc.o
  LD [M]  drivers/nvme/target/nvmet-tcp.o
  LD [M]  drivers/nvme/target/nvmet-rdma.o
  MODPOST drivers/nvme/Module.symvers
  CC [M]  drivers/nvme/target/nvme-fcloop.mod.o
  CC [M]  drivers/nvme/target/nvme-loop.mod.o
  CC [M]  drivers/nvme/target/nvmet-fc.mod.o
  CC [M]  drivers/nvme/target/nvmet-rdma.mod.o
  CC [M]  drivers/nvme/target/nvmet-tcp.mod.o
  CC [M]  drivers/nvme/target/nvmet.mod.o
  LD [M]  drivers/nvme/target/nvmet-tcp.ko
  LD [M]  drivers/nvme/target/nvme-loop.ko
  LD [M]  drivers/nvme/target/nvmet-rdma.ko
  LD [M]  drivers/nvme/target/nvmet-fc.ko
  LD [M]  drivers/nvme/target/nvme-fcloop.ko
  LD [M]  drivers/nvme/target/nvmet.ko
+ HOST=drivers/nvme/host
+ TARGET=drivers/nvme/target
++ uname -r
+ HOST_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/
++ uname -r
+ TARGET_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target/
+ cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host//
+ cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
+ ls -lrth /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/ /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/:
total 6.3M
-rw-r--r--. 1 root root 2.7M Apr  9 15:46 nvme-core.ko
-rw-r--r--. 1 root root 426K Apr  9 15:46 nvme-fabrics.ko
-rw-r--r--. 1 root root 925K Apr  9 15:46 nvme-fc.ko
-rw-r--r--. 1 root root 714K Apr  9 15:46 nvme.ko
-rw-r--r--. 1 root root 856K Apr  9 15:46 nvme-rdma.ko
-rw-r--r--. 1 root root 799K Apr  9 15:46 nvme-tcp.ko

/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//:
total 6.3M
-rw-r--r--. 1 root root 475K Apr  9 15:46 nvme-fcloop.ko
-rw-r--r--. 1 root root 419K Apr  9 15:46 nvme-loop.ko
-rw-r--r--. 1 root root 734K Apr  9 15:46 nvmet-fc.ko
-rw-r--r--. 1 root root 3.2M Apr  9 15:46 nvmet.ko
-rw-r--r--. 1 root root 822K Apr  9 15:46 nvmet-rdma.ko
-rw-r--r--. 1 root root 671K Apr  9 15:46 nvmet-tcp.ko
+ sync
+ sync
+ sync
+ modprobe nvme
+ echo 'Press enter to continue ...'
Press enter to continue ...
+ read next

1.1 nvme-admin-passthru error injection logging error messages :-
------------------------------------------------------------------------
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) # sh error_inject.sh 
++ NN=1
++ NQN=testnqn
++ let NR_DEVICES=NN+1
++ modprobe -r null_blk
++ modprobe -r nvme
++ modprobe null_blk nr_devices=0
++ modprobe nvme
++ modprobe nvme-fabrics
++ modprobe nvmet
++ modprobe nvme-loop
++ dmesg -c
++ sleep 2
++ tree /sys/kernel/config
/sys/kernel/config
├── nullb
│   └── features
└── nvmet
    ├── hosts
    ├── ports
    └── subsystems

5 directories, 1 file
++ sleep 1
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
+++ shuf -i 1-1 -n 1
++ for i in `shuf -i  1-$NN -n $NN`
++ mkdir config/nullb/nullb1
++ echo 1
++ echo 4096
++ echo 2048
++ echo 1
+++ cat config/nullb/nullb1/index
++ IDX=0
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
++ echo ' ####### /dev/nullb0'
 ####### /dev/nullb0
++ echo -n /dev/nullb0
++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
/dev/nullb0
++ echo 1
++ dmesg -c
[  735.981707] nvmet: adding nsid 1 to subsystem testnqn
++ mkdir /sys/kernel/config/nvmet/ports/1/
++ echo -n loop
++ echo -n 1
++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
++ echo transport=loop,nqn=testnqn
++ sleep 1
++ mount
++ column -t
++ grep nvme
++ dmesg -c
[  735.990847] nvmet: creating nvm controller 1 for subsystem testnqn for NQN nqn.2014-08.org.nvmexpress:uuid:8a582f37-906c-4e38-b4f1-63da52eb618c.
[  735.990963] nvme nvme1: creating 48 I/O queues.
[  735.994732] nvme nvme1: new ctrl: "testnqn"
Node SN Model Namespace Usage Format FW Rev 
nvme1n1 7eb450b4757e46995c7e Linux 1 2.15 GB / 2.15 GB 4 KiB + 0 B 5.17.0-r
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0 
NVMe status: Access Denied: Access to the namespace and/or LBA range is denied due to lack of access rights(0x4286)
Node SN Model Namespace Usage Format FW Rev 
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0 
[  737.006671] FAULT_INJECTION: forcing a failure.
               name fault_inject, interval 1, probability 100, space 0, times 1000
[  737.006679] CPU: 21 PID: 38553 Comm: kworker/21:0 Tainted: G           OE     5.17.0-rc2nvme+ #70
[  737.006683] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[  737.006686] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[  737.006694] Call Trace:
[  737.006713]  <TASK>
[  737.006716]  dump_stack_lvl+0x48/0x5e
[  737.006722]  should_fail.cold+0x32/0x37
[  737.006727]  nvme_should_fail+0x38/0x90 [nvme_core]
[  737.006739]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[  737.006743]  nvmet_req_complete+0x11/0x50 [nvmet]
[  737.006752]  process_one_work+0x1af/0x380
[  737.006756]  ? rescuer_thread+0x370/0x370
[  737.006758]  worker_thread+0x50/0x3a0
[  737.006771]  ? rescuer_thread+0x370/0x370
[  737.006773]  kthread+0xe7/0x110
[  737.006776]  ? kthread_complete_and_exit+0x20/0x20
[  737.006780]  ret_from_fork+0x22/0x30
[  737.006786]  </TASK>
[  737.006792] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR 
[  737.011040] FAULT_INJECTION: forcing a failure.
               name fault_inject, interval 1, probability 100, space 0, times 999
[  737.011047] CPU: 22 PID: 10449 Comm: kworker/22:9 Tainted: G           OE     5.17.0-rc2nvme+ #70
[  737.011051] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[  737.011053] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[  737.011062] Call Trace:
[  737.011065]  <TASK>
[  737.011067]  dump_stack_lvl+0x48/0x5e
[  737.011074]  should_fail.cold+0x32/0x37
[  737.011079]  nvme_should_fail+0x38/0x90 [nvme_core]
[  737.011091]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[  737.011095]  nvmet_req_complete+0x11/0x50 [nvmet]
[  737.011103]  process_one_work+0x1af/0x380
[  737.011108]  worker_thread+0x50/0x3a0
[  737.011111]  ? rescuer_thread+0x370/0x370
[  737.011113]  kthread+0xe7/0x110
[  737.011117]  ? kthread_complete_and_exit+0x20/0x20
[  737.011120]  ret_from_fork+0x22/0x30
[  737.011126]  </TASK>
[  737.011133] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR 
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

real	0m0.339s
user	0m0.001s
sys	0m0.004s
+ rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
+ rmdir /sys/kernel/config/nvmet/ports/1
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn
+ rmdir config/nullb/nullb1
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files

1.2 blktests results :-
-----------------------------------------------------------------------
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) # cdblktests 
blktests (master) # ./check nvme
nvme/002 (create many subsystems and test discovery)         [passed]
    runtime  17.137s  ...  17.312s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.083s  ...  10.089s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.443s  ...  1.433s
nvme/005 (reset local loopback target)                       [passed]
    runtime  6.770s  ...  6.795s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.070s  ...  0.063s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.035s  ...  0.042s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.455s  ...  1.457s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.418s  ...  1.420s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  7.805s  ...  7.906s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  76.991s  ...  76.979s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  11.506s  ...  12.301s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  79.186s  ...  80.436s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  8.907s  ...  8.850s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  8.685s  ...  8.669s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
    runtime  11.425s  ...  10.842s
nvme/017 (create/delete many file-ns and test discovery)     [passed]
    runtime  11.678s  ...  11.080s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.416s  ...  1.402s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.441s  ...  1.428s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.426s  ...  1.412s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.417s  ...  1.401s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  6.770s  ...  6.736s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.427s  ...  1.455s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.406s  ...  1.406s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.430s  ...  1.410s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.410s  ...  1.406s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.419s  ...  1.422s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.412s  ...  1.433s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  1.564s  ...  1.536s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.186s  ...  0.192s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  53.898s  ...  53.842s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.018s  ...  0.019s

1.3 Test fio verify job with NVMeOF ZBD backend :-
-----------------------------------------------------------------------
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) #
nvme (nvme-5.18) # ./zbdev.sh 1
++ SUBSYS=/sys/kernel/config/nvmet/subsystems
++ PORT=/sys/kernel/config/nvmet/ports
++ NQN=testnqn
++ NN=1
++ main
++ unload
++ modprobe -r null_blk
++ modprobe null_blk zoned=1 nr_devices=0
++ modprobe nvme
++ modprobe nvme-fabrics
++ modprobe nvmet
++ modprobe nvme-loop
++ dmesg -c
++ sleep 2
++ make_subsys
++ tree /sys/kernel/config
/sys/kernel/config
├── nullb
│   └── features
└── nvmet
    ├── hosts
    ├── ports
    └── subsystems

5 directories, 1 file
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
++ echo -n 1
++ make_port
++ mkdir /sys/kernel/config/nvmet/ports/1/
++ echo -n loop
++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
++ sleep 1
++ connect_host
++ echo transport=loop,nqn=testnqn
++ make_ns
+++ shuf -i 1-1 -n 1
++ for i in `shuf -i  1-$NN -n $NN`
++ mkdir config/nullb/nullb1
++ echo 1
++ echo 4096
++ echo 1024
++ echo 128
++ echo 1
++ echo 1
+++ cat config/nullb/nullb1/index
++ IDX=0
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
++ echo ' ####### /dev/nullb0'
 ####### /dev/nullb0
++ echo -n /dev/nullb0
++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
/dev/nullb0
++ echo /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
/sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
++ echo 1
++ '[' 1 ']'
+++ wc -l
+++ ls -l /dev/nvme1
++ cnt=1
++ echo 1
1
++ '[' 1 -gt 1 ']'
++ sleep 1
++ dmesg -c
[   36.186752] nvmet: creating nvm controller 1 for subsystem testnqn for NQN nqn.2014-08.org.nvmexpress:uuid:7f0546f8-2eab-4195-b673-0a434fa67768.
[   36.186849] nvme nvme1: creating 48 I/O queues.
[   36.190938] nvme nvme1: new ctrl: "testnqn"
[   36.198983] nvmet: adding nsid 1 to subsystem testnqn
[   36.200458] nvme nvme1: rescanning namespaces.
++ '[' 1 ']'
+++ wc -l
+++ ls -l /dev/nvme1 /dev/nvme1n1
++ cnt=2
++ echo 2
2
++ '[' 2 -gt 1 ']'
++ break
++ dmesg -c
++ ls -lrth /dev/nvme0 /dev/nvme0n1 /dev/nvme1 /dev/nvme1n1 /dev/nvme-fabrics
crw-------. 1 root root 244,   0 Apr  9 16:24 /dev/nvme0
brw-rw----. 1 root disk 259,   0 Apr  9 16:24 /dev/nvme0n1
crw-------. 1 root root  10, 125 Apr  9 16:24 /dev/nvme-fabrics
crw-------. 1 root root 244,   1 Apr  9 16:24 /dev/nvme1
brw-rw----. 1 root disk 259,   2 Apr  9 16:24 /dev/nvme1n1
nvme (nvme-5.18) # blkzone report /dev/nvme1n1 
  start: 0x000000000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x000040000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x000080000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x0000c0000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x000100000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x000140000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x000180000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
  start: 0x0001c0000, len 0x040000, wptr 0x000000 reset:0 non-seq:0, zcond: 1(em) [type: 2(SEQ_WRITE_REQUIRED)]
nvme (nvme-5.18) # 
nvme (nvme-5.18) # 
nvme (nvme-5.18) # 
nvme (nvme-5.18) # 
nvme (nvme-5.18) # fio fio/zverify.fio --filename=/dev/nvme1n1  --showcmd
fio --name=write-and-verify --rw=randwrite --zonemode=zbd --bs=4k --direct=1 --ioengine=libaio --iodepth=4 --norandommap --randrepeat=1 --verify=crc32c --size=800m --group_reporting 
nvme (nvme-5.18) # fio fio/zverify.fio --filename=/dev/nvme1n1 
write-and-verify: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
fio-3.27
Starting 1 process
/dev/nvme1n1: rounded down io_size from 838860800 to 805306368
Jobs: 1 (f=1): [V(1)][100.0%][r=418MiB/s][r=107k IOPS][eta 00m:00s]                   
write-and-verify: (groupid=0, jobs=1): err= 0: pid=3219: Sat Apr  9 16:28:57 2022
  read: IOPS=107k, BW=417MiB/s (438MB/s)(768MiB/1840msec)
    slat (nsec): min=2414, max=66507, avg=5908.64, stdev=3564.30
    clat (nsec): min=11421, max=82798, avg=30300.22, stdev=7387.49
     lat (nsec): min=17532, max=93117, avg=36300.44, stdev=6939.55
    clat percentiles (nsec):
     |  1.00th=[17024],  5.00th=[18560], 10.00th=[22400], 20.00th=[25216],
     | 30.00th=[26240], 40.00th=[27264], 50.00th=[28288], 60.00th=[31104],
     | 70.00th=[33536], 80.00th=[36096], 90.00th=[39680], 95.00th=[44288],
     | 99.00th=[51456], 99.50th=[55040], 99.90th=[62720], 99.95th=[65280],
     | 99.99th=[71168]
  write: IOPS=76.2k, BW=298MiB/s (312MB/s)(768MiB/2579msec); 6 zone resets
    slat (nsec): min=3557, max=75212, avg=10193.65, stdev=4898.93
    clat (usec): min=13, max=2587, avg=39.71, stdev=15.45
     lat (usec): min=18, max=2593, avg=50.02, stdev=15.59
    clat percentiles (usec):
     |  1.00th=[   22],  5.00th=[   25], 10.00th=[   28], 20.00th=[   32],
     | 30.00th=[   35], 40.00th=[   36], 50.00th=[   39], 60.00th=[   42],
     | 70.00th=[   44], 80.00th=[   47], 90.00th=[   53], 95.00th=[   59],
     | 99.00th=[   73], 99.50th=[   79], 99.90th=[   95], 99.95th=[  103],
     | 99.99th=[  123]
   bw (  KiB/s): min=49520, max=320424, per=85.97%, avg=262144.00, stdev=108036.16, samples=6
   iops        : min=12380, max=80106, avg=65536.00, stdev=27009.04, samples=6
  lat (usec)   : 20=3.61%, 50=89.00%, 100=7.37%, 250=0.03%
  lat (msec)   : 4=0.01%
  cpu          : usr=21.05%, sys=31.87%, ctx=426449, majf=0, minf=4626
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=196608,196608,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=417MiB/s (438MB/s), 417MiB/s-417MiB/s (438MB/s-438MB/s), io=768MiB (805MB), run=1840-1840msec
  WRITE: bw=298MiB/s (312MB/s), 298MiB/s-298MiB/s (312MB/s-312MB/s), io=768MiB (805MB), run=2579-2579msec

Disk stats (read/write):
  nvme1n1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
nvme (nvme-5.18) # 
nvme (nvme-5.18) # ./delete.sh 
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

real	0m0.340s
user	0m0.000s
sys	0m0.006s
+ rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
+ rmdir /sys/kernel/config/nvmet/ports/1
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn
+ rmdir config/nullb/nullb1
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
nvme (nvme-5.18) # 


2. Tests with CONFIG_BLK_DEV_ZONED disabled :-
-----------------------------------------------------------------------

nvme (nvme-5.18) # grep -i zoned .config
# CONFIG_BLK_DEV_ZONED is not set
nvme (nvme-5.18) # git log -3
commit 0d77072e26fa074da0c36e423a5802fd18ef976d (HEAD -> nvme-5.18)
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Fri Apr 8 16:45:18 2022 -0700

    nvme-core: mark internal passthru req REQ_QUIET
    
    Mark internal passthru requests quiet in the submission path with
    the RQF_QUIET flag added in __nvme_submit_sync_cmd(). In the
    completion path, if the nvme request resulted in an error and the
    request is marked RQF_QUIET, then don't log the error with
    nvme_log_error().
    
    Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>

commit d4df054957f3cbd37601d65d8e7460aaaee8733d
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Fri Apr 8 15:50:10 2022 -0700

    nvme-core: don't check non-mdts for disc ctrl
    
    Don't check the non-mdts limits for the discovery controller as it
    doesn't support the I/O command set. This also masks the following
    warning reported by nvme_log_error() when running blktests
    nvme/002: -
    
    [ 2005.155946] run blktests nvme/002 at 2022-04-09 16:57:47
    [ 2005.192223] loop: module loaded
    [ 2005.196429] nvmet: adding nsid 1 to subsystem blktests-subsystem-0
    [ 2005.200334] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
    
    <------------------------------SNIP---------------------------------->
    
    [ 2008.958108] nvmet: adding nsid 1 to subsystem blktests-subsystem-997
    [ 2008.962082] nvmet: adding nsid 1 to subsystem blktests-subsystem-998
    [ 2008.966102] nvmet: adding nsid 1 to subsystem blktests-subsystem-999
    [ 2008.973132] nvmet: creating discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN testhostnqn.
    *[ 2008.973196] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
    [ 2008.974595] nvme nvme1: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
    [ 2009.103248] nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
    
    Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>

commit 18aac654ff824a4b169e51d72b674245a7a1b79c
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Thu Apr 7 12:11:37 2022 -0700

    nvmet: handle admin default command set identifier
    
    When configuring non-mdts limits, the NVMeOF in-kernel host issues
    an Identify command with the cns value set to NVME_ID_CNS_CS_CTRL
    and the csi value set to NVME_CSI_NVM. In the target code we only
    handle the NVME_ID_CNS_CS_CTRL case with NVME_CSI_ZNS when
    CONFIG_BLK_DEV_ZONED is set.

    Handle the missing case for both CONFIG_BLK_DEV_ZONED and
    !CONFIG_BLK_DEV_ZONED so the target can handle NVME_ID_CNS_CS_CTRL
    with NVME_CSI_NVM.

    Use this opportunity to add a default for the NVME_ID_CNS_CTRL case,
    which makes the code uniform for all the cases in
    nvmet_execute_identify().

    Also, rename nvmet_execute_identify_ctrl() to
    nvmet_execute_identify_cns_cs_ctrl_nvm() and
    nvmet_execute_identify_cns_cs_ctrl() to
    nvmet_execute_identify_cns_cs_ctrl_zns().
    
    This also masks the following warning reported by nvme_log_error()
    when running blktests nvme/012:-
    
    [ 2131.702140] run blktests nvme/012 at 2022-04-09 16:59:53
    [ 2131.722144] loop0: detected capacity change from 0 to 2097152
    [ 2131.730586] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
    [ 2131.738826] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN testhostnqn.
    *[ 2131.738911] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
    [ 2131.740540] nvme nvme1: creating 48 I/O queues.
    [ 2131.743925] nvme nvme1: new ctrl: "blktests-subsystem-1"
    [ 2132.790058] XFS (nvme1n1): Mounting V5 Filesystem
    [ 2132.793667] XFS (nvme1n1): Ending clean mount
    [ 2132.794030] xfs filesystem being mounted at /mnt/blktests supports timestamps until 2038 (0x7fffffff)
    [ 2142.471812] XFS (nvme1n1): Unmounting Filesystem
    [ 2142.492566] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
    
    Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
nvme (nvme-5.18) # ./compile_nvme.sh 
+ umount /mnt/nvme0n1
+ clear_dmesg
./compile_nvme.sh: line 3: clear_dmesg: command not found
umount: /mnt/nvme0n1: no mount point specified.
+ ./delete.sh
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 0 controller(s)

real	0m0.003s
user	0m0.000s
sys	0m0.002s
+ rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
+ rmdir /sys/kernel/config/nvmet/ports/1
rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file or directory
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
./delete.sh: line 14: /sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such file or directory
+ rmdir 'config/nullb/nullb*'
rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
+ sleep 1
+ git diff
+ sleep 1
++ nproc
+ make -j 48 M=drivers/nvme/target/ clean
++ nproc
+ make -j 48 M=drivers/nvme/ modules
  CC [M]  drivers/nvme/target/core.o
  CC [M]  drivers/nvme/target/configfs.o
  CC [M]  drivers/nvme/target/admin-cmd.o
  CC [M]  drivers/nvme/target/fabrics-cmd.o
  CC [M]  drivers/nvme/target/discovery.o
  CC [M]  drivers/nvme/target/io-cmd-file.o
  CC [M]  drivers/nvme/target/io-cmd-bdev.o
  CC [M]  drivers/nvme/target/passthru.o
  CC [M]  drivers/nvme/target/trace.o
  CC [M]  drivers/nvme/target/loop.o
  CC [M]  drivers/nvme/target/rdma.o
  CC [M]  drivers/nvme/target/fc.o
  CC [M]  drivers/nvme/target/fcloop.o
  CC [M]  drivers/nvme/target/tcp.o
  LD [M]  drivers/nvme/target/nvme-loop.o
  LD [M]  drivers/nvme/target/nvme-fcloop.o
  LD [M]  drivers/nvme/target/nvmet.o
  LD [M]  drivers/nvme/target/nvmet-fc.o
  LD [M]  drivers/nvme/target/nvmet-tcp.o
  LD [M]  drivers/nvme/target/nvmet-rdma.o
  MODPOST drivers/nvme/Module.symvers
  CC [M]  drivers/nvme/target/nvme-fcloop.mod.o
  CC [M]  drivers/nvme/target/nvme-loop.mod.o
  CC [M]  drivers/nvme/target/nvmet-fc.mod.o
  CC [M]  drivers/nvme/target/nvmet-rdma.mod.o
  CC [M]  drivers/nvme/target/nvmet-tcp.mod.o
  CC [M]  drivers/nvme/target/nvmet.mod.o
  LD [M]  drivers/nvme/target/nvmet-fc.ko
  LD [M]  drivers/nvme/target/nvme-fcloop.ko
  LD [M]  drivers/nvme/target/nvme-loop.ko
  LD [M]  drivers/nvme/target/nvmet-tcp.ko
  LD [M]  drivers/nvme/target/nvmet.ko
  LD [M]  drivers/nvme/target/nvmet-rdma.ko
+ HOST=drivers/nvme/host
+ TARGET=drivers/nvme/target
++ uname -r
+ HOST_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/
++ uname -r
+ TARGET_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target/
+ cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host//
+ cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
+ ls -lrth /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/ /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/:
total 6.1M
-rw-r--r--. 1 root root 2.4M Apr  9 16:15 nvme-core.ko
-rw-r--r--. 1 root root 426K Apr  9 16:15 nvme-fabrics.ko
-rw-r--r--. 1 root root 924K Apr  9 16:15 nvme-fc.ko
-rw-r--r--. 1 root root 714K Apr  9 16:15 nvme.ko
-rw-r--r--. 1 root root 856K Apr  9 16:15 nvme-rdma.ko
-rw-r--r--. 1 root root 799K Apr  9 16:15 nvme-tcp.ko

/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//:
total 6.0M
-rw-r--r--. 1 root root 475K Apr  9 16:15 nvme-fcloop.ko
-rw-r--r--. 1 root root 418K Apr  9 16:15 nvme-loop.ko
-rw-r--r--. 1 root root 733K Apr  9 16:15 nvmet-fc.ko
-rw-r--r--. 1 root root 3.0M Apr  9 16:15 nvmet.ko
-rw-r--r--. 1 root root 822K Apr  9 16:15 nvmet-rdma.ko
-rw-r--r--. 1 root root 670K Apr  9 16:15 nvmet-tcp.ko
+ sync
+ sync
+ sync
+ modprobe nvme
+ echo 'Press enter to continue ...'
Press enter to continue ...
+ read next


2.1 nvme-admin-passthru error injection logging error messages.
nvme (nvme-5.18) # vim error_inject.sh 
nvme (nvme-5.18) # sh error_inject.sh 
++ NN=1
++ NQN=testnqn
++ let NR_DEVICES=NN+1
++ modprobe -r null_blk
++ modprobe -r nvme
++ modprobe null_blk nr_devices=0
++ modprobe nvme
++ modprobe nvme-fabrics
++ modprobe nvmet
++ modprobe nvme-loop
++ dmesg -c
++ sleep 2
++ tree /sys/kernel/config
/sys/kernel/config
├── nullb
│   └── features
└── nvmet
    ├── hosts
    ├── ports
    └── subsystems

5 directories, 1 file
++ sleep 1
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
+++ shuf -i 1-1 -n 1
++ for i in `shuf -i  1-$NN -n $NN`
++ mkdir config/nullb/nullb1
++ echo 1
++ echo 4096
++ echo 2048
++ echo 1
+++ cat config/nullb/nullb1/index
++ IDX=0
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
++ echo ' ####### /dev/nullb0'
 ####### /dev/nullb0
++ echo -n /dev/nullb0
++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
/dev/nullb0
++ echo 1
++ dmesg -c
[  413.375009] nvme nvme0: 48/0/0 default/read/poll queues
[  416.369926] nvmet: adding nsid 1 to subsystem testnqn
++ mkdir /sys/kernel/config/nvmet/ports/1/
++ echo -n loop
++ echo -n 1
++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
++ echo transport=loop,nqn=testnqn
++ sleep 1
++ mount
++ grep nvme
++ column -t
++ dmesg -c
[  416.376943] nvmet: creating nvm controller 1 for subsystem testnqn for NQN nqn.2014-08.org.nvmexpress:uuid:6b4a42a3-6ddc-44fc-bcfa-0cab47623a1f.
[  416.377040] nvme nvme1: creating 48 I/O queues.
[  416.380887] nvme nvme1: new ctrl: "testnqn"
Node SN Model Namespace Usage Format FW Rev 
nvme1n1 7c695f8c7cfff9e62d0f Linux 1 2.15 GB / 2.15 GB 4 KiB + 0 B 5.17.0-r
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0 
NVMe status: Access Denied: Access to the namespace and/or LBA range is denied due to lack of access rights(0x4286)
Node SN Model Namespace Usage Format FW Rev 
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0 
[  417.393147] FAULT_INJECTION: forcing a failure.
               name fault_inject, interval 1, probability 100, space 0, times 1000
[  417.393155] CPU: 25 PID: 10499 Comm: kworker/25:9 Tainted: G           OE     5.17.0-rc2nvme+ #71
[  417.393159] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[  417.393161] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[  417.393170] Call Trace:
[  417.393188]  <TASK>
[  417.393190]  dump_stack_lvl+0x48/0x5e
[  417.393196]  should_fail.cold+0x32/0x37
[  417.393201]  nvme_should_fail+0x38/0x90 [nvme_core]
[  417.393213]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[  417.393217]  nvmet_req_complete+0x11/0x50 [nvmet]
[  417.393225]  process_one_work+0x1af/0x380
[  417.393229]  worker_thread+0x50/0x3a0
[  417.393232]  ? rescuer_thread+0x370/0x370
[  417.393234]  kthread+0xe7/0x110
[  417.393238]  ? kthread_complete_and_exit+0x20/0x20
[  417.393241]  ret_from_fork+0x22/0x30
[  417.393247]  </TASK>
[  417.393254] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR 
[  417.397215] FAULT_INJECTION: forcing a failure.
               name fault_inject, interval 1, probability 100, space 0, times 999
[  417.397221] CPU: 26 PID: 10947 Comm: kworker/26:7 Tainted: G           OE     5.17.0-rc2nvme+ #71
[  417.397225] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[  417.397227] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[  417.397235] Call Trace:
[  417.397238]  <TASK>
[  417.397241]  dump_stack_lvl+0x48/0x5e
[  417.397247]  should_fail.cold+0x32/0x37
[  417.397252]  nvme_should_fail+0x38/0x90 [nvme_core]
[  417.397264]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[  417.397268]  nvmet_req_complete+0x11/0x50 [nvmet]
[  417.397276]  process_one_work+0x1af/0x380
[  417.397281]  worker_thread+0x50/0x3a0
[  417.397283]  ? rescuer_thread+0x370/0x370
[  417.397296]  kthread+0xe7/0x110
[  417.397300]  ? kthread_complete_and_exit+0x20/0x20
[  417.397303]  ret_from_fork+0x22/0x30
[  417.397310]  </TASK>
[  417.397316] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR 
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

real	0m0.357s
user	0m0.000s
sys	0m0.006s
+ rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
+ rmdir /sys/kernel/config/nvmet/ports/1
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn
+ rmdir config/nullb/nullb1
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
nvme (nvme-5.18) # 
nvme (nvme-5.18) # 

2.2 blktests results.
-----------------------------------------------------------------------
nvme (nvme-5.18) # cdblktests 
blktests (master) # ./check nvme 
nvme/002 (create many subsystems and test discovery)         [passed]
    runtime  17.312s  ...  16.744s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.089s  ...  10.091s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.433s  ...  1.445s
nvme/005 (reset local loopback target)                       [passed]
    runtime  6.795s  ...  6.802s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.063s  ...  0.089s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.042s  ...  0.040s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.457s  ...  1.451s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.420s  ...  1.423s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  7.906s  ...  7.961s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  76.979s  ...  75.022s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  12.301s  ...  11.106s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  80.436s  ...  77.184s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  8.850s  ...  8.883s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  8.669s  ...  8.710s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
    runtime  10.842s  ...  11.078s
nvme/017 (create/delete many file-ns and test discovery)     [passed]
    runtime  11.080s  ...  11.047s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.402s  ...  1.440s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.428s  ...  1.467s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.412s  ...  1.419s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.401s  ...  1.405s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  6.736s  ...  6.733s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.455s  ...  1.437s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.406s  ...  1.407s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.410s  ...  1.430s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.406s  ...  1.410s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.422s  ...  1.436s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.433s  ...  1.405s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  1.536s  ...  1.544s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.192s  ...  0.183s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  53.842s  ...  53.901s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.019s  ...  0.018s
blktests (master) # 
blktests (master) # 
nvme (nvme-5.18) # 
nvme (nvme-5.18) # 
nvme (nvme-5.18) # 

2.3 Try and create NVMeOF controller with ZBD backend & fail gracefully.
-----------------------------------------------------------------------
nvme (nvme-5.18) # 
nvme (nvme-5.18) # ./zbdev.sh 1 
++ SUBSYS=/sys/kernel/config/nvmet/subsystems
++ PORT=/sys/kernel/config/nvmet/ports
++ NQN=testnqn
++ NN=1
++ main
++ unload
++ modprobe -r null_blk
++ modprobe null_blk zoned=1 nr_devices=0
++ modprobe nvme
++ modprobe nvme-fabrics
++ modprobe nvmet
++ modprobe nvme-loop
++ dmesg -c
++ sleep 2
++ make_subsys
++ tree /sys/kernel/config
/sys/kernel/config
├── nullb
│   └── features
└── nvmet
    ├── hosts
    ├── ports
    └── subsystems

5 directories, 1 file
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
++ echo -n 1
++ make_port
++ mkdir /sys/kernel/config/nvmet/ports/1/
++ echo -n loop
++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
++ sleep 1
++ connect_host
++ echo transport=loop,nqn=testnqn
++ make_ns
+++ shuf -i 1-1 -n 1
++ for i in `shuf -i  1-$NN -n $NN`
++ mkdir config/nullb/nullb1
++ echo 1
++ echo 4096
++ echo 1024
++ echo 128
++ echo 1
++ echo 1
./zbdev.sh: line 39: echo: write error: Invalid argument
+++ cat config/nullb/nullb1/index
++ IDX=0
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
++ echo ' ####### /dev/nullb0'
 ####### /dev/nullb0
++ echo -n /dev/nullb0
++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
/dev/nullb0
++ echo /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
/sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
++ echo 1
./zbdev.sh: line 49: echo: write error: No such file or directory
++ '[' 1 ']'
+++ wc -l
+++ ls -l /dev/nvme1
++ cnt=1
++ echo 1
1
++ '[' 1 -gt 1 ']'
++ sleep 1
++ dmesg -c
[  424.749844] nvme nvme0: 48/0/0 default/read/poll queues
[  427.740698] nvmet: creating nvm controller 1 for subsystem testnqn for NQN nqn.2014-08.org.nvmexpress:uuid:477f34ee-8fbe-47c2-858e-0312fd3b85b5.
[  427.740788] nvme nvme1: creating 48 I/O queues.
[  427.745480] nvme nvme1: new ctrl: "testnqn"
[  427.748984] null_blk: CONFIG_BLK_DEV_ZONED not enabled
[  427.752564] nvmet: adding nsid 1 to subsystem testnqn
[  427.754146] nvmet: failed to open block device /dev/nullb0: (-2)
++ '[' 1 ']'
+++ wc -l
+++ ls -l /dev/nvme1
++ cnt=1
++ echo 1
1
++ '[' 1 -gt 1 ']'
++ sleep 1
++ dmesg -c
++ '[' 1 ']'
+++ wc -l
+++ ls -l /dev/nvme1
++ cnt=1
++ echo 1
1
++ '[' 1 -gt 1 ']'
++ sleep 1
++ dmesg -c
++ '[' 1 ']'
+++ wc -l
+++ ls -l /dev/nvme1
++ cnt=1
++ echo 1
1
++ '[' 1 -gt 1 ']'
++ sleep 1
^Cnvme (nvme-5.18) # ./delete.sh 
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

real	0m1.041s
user	0m0.000s
sys	0m0.004s
+ rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
+ rmdir /sys/kernel/config/nvmet/ports/1
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn
+ rmdir config/nullb/nullb1
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
nvme (nvme-5.18) # 


-- 
2.29.0




* [PATCH 1/3] nvmet: handle admin default command set identifier
  2022-04-11  3:12 [PATCH 0/3] nvme: fix internal passthru error messages Chaitanya Kulkarni
@ 2022-04-11  3:12 ` Chaitanya Kulkarni
  2022-04-11  6:18   ` Christoph Hellwig
  2022-04-11  3:12 ` [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl Chaitanya Kulkarni
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
  2 siblings, 1 reply; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11  3:12 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, sagi, kbusch, Chaitanya Kulkarni

When configuring non-mdts limits, the NVMeOF in-kernel host issues an
Identify command with the cns value set to NVME_ID_CNS_CS_CTRL and the
csi value set to NVME_CSI_NVM. In the target code we only handle the
NVME_ID_CNS_CS_CTRL case with NVME_CSI_ZNS when CONFIG_BLK_DEV_ZONED is
set.

Handle the missing case for both CONFIG_BLK_DEV_ZONED and
!CONFIG_BLK_DEV_ZONED so the target can handle NVME_ID_CNS_CS_CTRL with
NVME_CSI_NVM.

Use this opportunity to add a default for the NVME_ID_CNS_CTRL case,
which makes the code uniform for all the cases in
nvmet_execute_identify().

Also, rename nvmet_execute_identify_ctrl() to
nvmet_execute_identify_cns_cs_ctrl_nvm() and
nvmet_execute_identify_cns_cs_ctrl() to
nvmet_execute_identify_cns_cs_ctrl_zns().

This also masks the following warning reported by nvme_log_error()
when running blktests nvme/012:-

[ 2131.702140] run blktests nvme/012 at 2022-04-09 16:59:53
[ 2131.722144] loop0: detected capacity change from 0 to 2097152
[ 2131.730586] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[ 2131.738826] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN testhostnqn.
*[ 2131.738911] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
[ 2131.740540] nvme nvme1: creating 48 I/O queues.
[ 2131.743925] nvme nvme1: new ctrl: "blktests-subsystem-1"
[ 2132.790058] XFS (nvme1n1): Mounting V5 Filesystem
[ 2132.793667] XFS (nvme1n1): Ending clean mount
[ 2132.794030] xfs filesystem being mounted at /mnt/blktests supports timestamps until 2038 (0x7fffffff)
[ 2142.471812] XFS (nvme1n1): Unmounting Filesystem
[ 2142.492566] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/target/admin-cmd.c | 22 ++++++++++++++--------
 drivers/nvme/target/nvmet.h     |  2 +-
 drivers/nvme/target/zns.c       |  2 +-
 3 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 397daaf51f1b..428303f1acca 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -342,7 +342,7 @@ static void nvmet_execute_get_log_page(struct nvmet_req *req)
 	nvmet_req_complete(req, NVME_SC_INVALID_FIELD | NVME_SC_DNR);
 }
 
-static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
+static void nvmet_execute_identify_cns_cs_ctrl_nvm(struct nvmet_req *req)
 {
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
 	struct nvmet_subsys *subsys = ctrl->subsys;
@@ -710,17 +710,23 @@ static void nvmet_execute_identify(struct nvmet_req *req)
 	case NVME_ID_CNS_CTRL:
 		switch (req->cmd->identify.csi) {
 		case NVME_CSI_NVM:
-			return nvmet_execute_identify_ctrl(req);
+			return nvmet_execute_identify_cns_cs_ctrl_nvm(req);
+		default:
+			break;
 		}
 		break;
 	case NVME_ID_CNS_CS_CTRL:
-		if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
-			switch (req->cmd->identify.csi) {
-			case NVME_CSI_ZNS:
-				return nvmet_execute_identify_cns_cs_ctrl(req);
-			default:
-				break;
+		switch (req->cmd->identify.csi) {
+		case NVME_CSI_NVM:
+			return nvmet_execute_identify_cns_cs_ctrl_nvm(req);
+		case NVME_CSI_ZNS:
+			if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
+				nvmet_execute_identify_cns_cs_ctrl_zns(req);
+				return;
 			}
+			break;
+		default:
+			break;
 		}
 		break;
 	case NVME_ID_CNS_NS_ACTIVE_LIST:
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 69818752a33a..fdb6956314ad 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -547,7 +547,7 @@ bool nvmet_ns_revalidate(struct nvmet_ns *ns);
 u16 blk_to_nvme_status(struct nvmet_req *req, blk_status_t blk_sts);
 
 bool nvmet_bdev_zns_enable(struct nvmet_ns *ns);
-void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req);
+void nvmet_execute_identify_cns_cs_ctrl_zns(struct nvmet_req *req);
 void nvmet_execute_identify_cns_cs_ns(struct nvmet_req *req);
 void nvmet_bdev_execute_zone_mgmt_recv(struct nvmet_req *req);
 void nvmet_bdev_execute_zone_mgmt_send(struct nvmet_req *req);
diff --git a/drivers/nvme/target/zns.c b/drivers/nvme/target/zns.c
index e34718b09550..5003d48b6630 100644
--- a/drivers/nvme/target/zns.c
+++ b/drivers/nvme/target/zns.c
@@ -71,7 +71,7 @@ bool nvmet_bdev_zns_enable(struct nvmet_ns *ns)
 	return true;
 }
 
-void nvmet_execute_identify_cns_cs_ctrl(struct nvmet_req *req)
+void nvmet_execute_identify_cns_cs_ctrl_zns(struct nvmet_req *req)
 {
 	u8 zasl = req->sq->ctrl->subsys->zasl;
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
-- 
2.29.0




* [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11  3:12 [PATCH 0/3] nvme: fix internal passthru error messages Chaitanya Kulkarni
  2022-04-11  3:12 ` [PATCH 1/3] nvmet: handle admin default command set identifier Chaitanya Kulkarni
@ 2022-04-11  3:12 ` Chaitanya Kulkarni
  2022-04-11  6:13   ` Christoph Hellwig
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
  2 siblings, 1 reply; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11  3:12 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, sagi, kbusch, Chaitanya Kulkarni

Don't check the non-mdts limits for the discovery controller as it
doesn't support the I/O command set. This also masks the following
warning reported by nvme_log_error() when running blktests nvme/002: -

[ 2005.155946] run blktests nvme/002 at 2022-04-09 16:57:47
[ 2005.192223] loop: module loaded
[ 2005.196429] nvmet: adding nsid 1 to subsystem blktests-subsystem-0
[ 2005.200334] nvmet: adding nsid 1 to subsystem blktests-subsystem-1

<------------------------------SNIP---------------------------------->

[ 2008.958108] nvmet: adding nsid 1 to subsystem blktests-subsystem-997
[ 2008.962082] nvmet: adding nsid 1 to subsystem blktests-subsystem-998
[ 2008.966102] nvmet: adding nsid 1 to subsystem blktests-subsystem-999
[ 2008.973132] nvmet: creating discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN testhostnqn.
*[ 2008.973196] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
[ 2008.974595] nvme nvme1: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
[ 2009.103248] nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f204c6f78b5b..449378a96a9f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2876,6 +2876,10 @@ static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
 	struct nvme_id_ctrl_nvm *id;
 	int ret;
 
+	/* Discovery controller doesn't support non mdts limits */
+	if (nvme_discovery_ctrl(ctrl))
+		return 0;
+
 	if (ctrl->oncs & NVME_CTRL_ONCS_DSM) {
 		ctrl->max_discard_sectors = UINT_MAX;
 		ctrl->max_discard_segments = NVME_DSM_MAX_RANGES;
-- 
2.29.0




* [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 [PATCH 0/3] nvme: fix internal passthru error messages Chaitanya Kulkarni
  2022-04-11  3:12 ` [PATCH 1/3] nvmet: handle admin default command set identifier Chaitanya Kulkarni
  2022-04-11  3:12 ` [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl Chaitanya Kulkarni
@ 2022-04-11  3:12 ` Chaitanya Kulkarni
  2022-04-11  6:12   ` Christoph Hellwig
                     ` (6 more replies)
  2 siblings, 7 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11  3:12 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, sagi, kbusch, Chaitanya Kulkarni

Mark internal passthru requests quiet in the submission path by setting
the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
if an nvme request results in an error and the request is marked
RQF_QUIET, don't log the error with nvme_log_error().

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 449378a96a9f..12302be83a6c 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
 {
 	blk_status_t status = nvme_error_status(nvme_req(req)->status);
 
-	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
+	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
 		nvme_log_error(req);
 	nvme_end_req_zoned(req);
 	nvme_trace_bio_complete(req);
@@ -1100,6 +1100,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 			goto out;
 	}
 
+	req->rq_flags |= RQF_QUIET;
 	ret = nvme_execute_rq(req, at_head);
 	if (result && ret >= 0)
 		*result = nvme_req(req)->result;
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
@ 2022-04-11  6:12   ` Christoph Hellwig
  2022-04-11 10:48     ` Chaitanya Kulkarni
  2022-04-11 10:28   ` Sagi Grimberg
                     ` (5 subsequent siblings)
  6 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2022-04-11  6:12 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, kbusch

On Sun, Apr 10, 2022 at 08:12:49PM -0700, Chaitanya Kulkarni wrote:
> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().

This misses out on a few other internal submissions like the PCIe
queue deletion or keep-alive.  Probably not much of an issue, but I'll
amend the commit log when applying it.  I'll wait a bit for more
feedback but plan to send this out later this week.
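
For reference, a sketch of what the same treatment could look like in
one of those paths (illustrative only, not part of this patch; it
assumes the 5.18-era shape of nvme_delete_queue() in
drivers/nvme/host/pci.c):

	static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
	{
		...
		req = blk_mq_alloc_request(q, nvme_req_op(&cmd), BLK_MQ_REQ_NOWAIT);
		if (IS_ERR(req))
			return PTR_ERR(req);
		/* sketch: mark this internal submission quiet as well */
		req->rq_flags |= RQF_QUIET;
		...
	}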


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11  3:12 ` [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl Chaitanya Kulkarni
@ 2022-04-11  6:13   ` Christoph Hellwig
  2022-04-11 10:27     ` Sagi Grimberg
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2022-04-11  6:13 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, kbusch

On Sun, Apr 10, 2022 at 08:12:48PM -0700, Chaitanya Kulkarni wrote:
> Don't check the non-MDTS limits for the discovery controller, as it
> does not support the I/O command set. This also masks the following
> warning reported by nvme_log_error() when running blktests nvme/002:

I think we can skip the whole call to nvme_init_non_mdts_limits in this
case.  And we should probably key it off having I/O queues to also
cover administrative controllers.
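
A rough sketch of that idea (untested; it assumes ctrl->queue_count > 1,
the check the fabrics drivers use elsewhere, is a reliable sign that I/O
queues are present at this point):

	/* only controllers with I/O queues have non-MDTS I/O limits */
	if (ctrl->queue_count > 1) {
		ret = nvme_init_non_mdts_limits(ctrl);
		if (ret < 0)
			return ret;
	}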


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/3] nvmet: handle admin default command set identifier
  2022-04-11  3:12 ` [PATCH 1/3] nvmet: handle admin default command set identifier Chaitanya Kulkarni
@ 2022-04-11  6:18   ` Christoph Hellwig
  2022-05-10  6:33     ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2022-04-11  6:18 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, kbusch

On Sun, Apr 10, 2022 at 08:12:47PM -0700, Chaitanya Kulkarni wrote:
> -static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
> +static void nvmet_execute_identify_cns_cs_ctrl_nvm(struct nvmet_req *req)

No, this does not work.

nvmet_execute_identify_ctrl handles the
"Identify Controller data structure" from the base spec, CNS 01h, no CSI.

The "I/O Command Set Specific Identify Controller data structure" from
the NVM Command set spec (CNS 06h, CSI 00h) is an entirely different
data structure and needs separate handling.

I'd also drop the cns here and in the other names; every identify command
has a CNS value associated with it, and these names are already getting
rather long.

> +			if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
> +				nvmet_execute_identify_cns_cs_ctrl_zns(req);
> +				return;
>  			}
> +			break;

Nit: I'd invert the IS_ENABLED check.  This saves one line, and one
level of indentation and just reads easier as well.
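
I.e., something like this (a sketch of the inverted form, keeping the
function name from the patch):

	if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
		break;
	nvmet_execute_identify_cns_cs_ctrl_zns(req);
	return;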


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11  6:13   ` Christoph Hellwig
@ 2022-04-11 10:27     ` Sagi Grimberg
  2022-04-11 10:49       ` Chaitanya Kulkarni
  0 siblings, 1 reply; 32+ messages in thread
From: Sagi Grimberg @ 2022-04-11 10:27 UTC (permalink / raw)
  To: Christoph Hellwig, Chaitanya Kulkarni; +Cc: linux-nvme, kbusch


>> Don't check the non-MDTS limits for the discovery controller, as it
>> does not support the I/O command set. This also masks the following
>> warning reported by nvme_log_error() when running blktests nvme/002:
> 
> I think we can skip the whole call to nvme_init_non_mdts_limits in this
> case.  And we should probably key it off having I/O queues to also
> cover administrative controllers.

I'd also move this check to the call-site instead of where it is now.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
  2022-04-11  6:12   ` Christoph Hellwig
@ 2022-04-11 10:28   ` Sagi Grimberg
  2022-04-11 10:49     ` Chaitanya Kulkarni
  2022-04-13 12:07   ` Yi Zhang
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 32+ messages in thread
From: Sagi Grimberg @ 2022-04-11 10:28 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: hch, kbusch


> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().
> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 449378a96a9f..12302be83a6c 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>   {
>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>   
> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>   		nvme_log_error(req);
>   	nvme_end_req_zoned(req);
>   	nvme_trace_bio_complete(req);
> @@ -1100,6 +1100,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>   			goto out;
>   	}
>   
> +	req->rq_flags |= RQF_QUIET;
>   	ret = nvme_execute_rq(req, at_head);
>   	if (result && ret >= 0)
>   		*result = nvme_req(req)->result;

Other than what Christoph pointed out, which can be
added incrementally.

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  6:12   ` Christoph Hellwig
@ 2022-04-11 10:48     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 10:48 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: linux-nvme, sagi, kbusch, Chaitanya Kulkarni

On 4/10/22 23:12, Christoph Hellwig wrote:
> On Sun, Apr 10, 2022 at 08:12:49PM -0700, Chaitanya Kulkarni wrote:
>> Mark internal passthru requests quiet in the submission path by setting
>> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
>> if an nvme request results in an error and the request is marked
>> RQF_QUIET, don't log the error with nvme_log_error().
> 
> This misses out on a few other internal submissions like the PCIe
> queue deletion or keep-alive.  Probably not much of an issue, but I'll
> amend the commit log when applying it.  I'll wait a bit for more
> feedback but plan to send this out later this week.
> 

Yes, it did miss that. Actually I have a WIP series to centralize all
passthru request submission, so that we have only one interface for
internal passthru (sync and async) and also to clean up the code for
__nvme_submit_sync_cmd(); I'll send that out soon.

This was just a quick fix, mainly to address the reports we are getting.

-ck



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11 10:28   ` Sagi Grimberg
@ 2022-04-11 10:49     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 10:49 UTC (permalink / raw)
  To: Sagi Grimberg, Chaitanya Kulkarni, linux-nvme; +Cc: hch, kbusch


> 
> Other than what Christoph pointed out, which can be
> added incrementally.
> 
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

I'll send out an incremental one, thanks for the review.

-ck

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 10:27     ` Sagi Grimberg
@ 2022-04-11 10:49       ` Chaitanya Kulkarni
  2022-04-11 12:09         ` Chaitanya Kulkarni
  0 siblings, 1 reply; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 10:49 UTC (permalink / raw)
  To: Sagi Grimberg, Christoph Hellwig, Chaitanya Kulkarni; +Cc: linux-nvme, kbusch

>> I think we can skip the whole call to nvme_init_non_mdts_limits in this
>> case.  And we should probably key it off having I/O queues to also
>> cover administrative controllers.
> 
> I'd also move this check to the call-site instead of where it is now.

Okay, I'll add it to V2.

-ck



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 10:49       ` Chaitanya Kulkarni
@ 2022-04-11 12:09         ` Chaitanya Kulkarni
  2022-04-11 12:12           ` Christoph Hellwig
  0 siblings, 1 reply; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 12:09 UTC (permalink / raw)
  To: Sagi Grimberg, Christoph Hellwig; +Cc: linux-nvme, kbusch

On 4/11/22 03:49, Chaitanya Kulkarni wrote:
>>> I think we can skip the whole call to nvme_init_non_mdts_limits in this
>>> case.  And we should probably key it off having I/O queues to also
>>> cover administrative controllers.
>>
>> I'd also move this check to the call-site instead of where it is now.
> 
> Okay, I'll add it to V2.
> 
> -ck
> 
> 

Instead of checking for the I/O queues: we already set and cache the
mandatory cntrltype field in the nvme_ctrl, so how about we use that?

nvme (nvme-5.18) # cat 0001-nvme-core-check-non-mdts-only-for-I-O-ctrl.patch
 From 93914f9ed43e7974294dc815ce6ed68c99044f16 Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <kch@nvidia.com>
Date: Fri, 8 Apr 2022 15:50:10 -0700
Subject: [PATCH] nvme-core: check non-mdts only for I/O ctrl

Validate that the controller type is NVME_CTRL_IO before checking the
non-MDTS limits, as the I/O command set is not supported for discovery
and admin controllers. This also masks the following warning reported
by nvme_log_error() when running blktests nvme/002:

[ 2005.155946] run blktests nvme/002 at 2022-04-09 16:57:47
[ 2005.192223] loop: module loaded
[ 2005.196429] nvmet: adding nsid 1 to subsystem blktests-subsystem-0
[ 2005.200334] nvmet: adding nsid 1 to subsystem blktests-subsystem-1

<------------------------------SNIP---------------------------------->

[ 2008.958108] nvmet: adding nsid 1 to subsystem blktests-subsystem-997
[ 2008.962082] nvmet: adding nsid 1 to subsystem blktests-subsystem-998
[ 2008.966102] nvmet: adding nsid 1 to subsystem blktests-subsystem-999
[ 2008.973132] nvmet: creating discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN testhostnqn.
*[ 2008.973196] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
[ 2008.974595] nvme nvme1: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
[ 2009.103248] nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
  drivers/nvme/host/core.c | 8 +++++---
  1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e8d6a1e52083..913eb4cec048 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3101,9 +3101,11 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
  	if (ret)
  		return ret;

-	ret = nvme_init_non_mdts_limits(ctrl);
-	if (ret < 0)
-		return ret;
+	if (ctrl->cntrltype == NVME_CTRL_IO) {
+		ret = nvme_init_non_mdts_limits(ctrl);
+		if (ret < 0)
+			return ret;
+	}

  	ret = nvme_configure_apst(ctrl);
  	if (ret < 0)
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 12:09         ` Chaitanya Kulkarni
@ 2022-04-11 12:12           ` Christoph Hellwig
  2022-04-11 12:40             ` Chaitanya Kulkarni
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2022-04-11 12:12 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Sagi Grimberg, Christoph Hellwig, linux-nvme, kbusch

On Mon, Apr 11, 2022 at 12:09:31PM +0000, Chaitanya Kulkarni wrote:
> -	ret = nvme_init_non_mdts_limits(ctrl);
> -	if (ret < 0)
> -		return ret;
> +	if (ctrl->cntrltype == NVME_CTRL_IO) {
> +		ret = nvme_init_non_mdts_limits(ctrl);
> +		if (ret < 0)
> +			return ret;
> +	}

This looks better, but I don't think it will actually work as-is.
The CNTRLTYPE field is newer than discovery controllers.  We
special case them in nvme_init_subsystem, which should move to
nvme_init_identify instead so that ctrl->cntrltype is always valid
after that.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 12:12           ` Christoph Hellwig
@ 2022-04-11 12:40             ` Chaitanya Kulkarni
  2022-04-11 14:14               ` Keith Busch
  0 siblings, 1 reply; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 12:40 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, linux-nvme, kbusch

On 4/11/22 05:12, Christoph Hellwig wrote:
> On Mon, Apr 11, 2022 at 12:09:31PM +0000, Chaitanya Kulkarni wrote:
>> -	ret = nvme_init_non_mdts_limits(ctrl);
>> -	if (ret < 0)
>> -		return ret;
>> +	if (ctrl->cntrltype == NVME_CTRL_IO) {
>> +		ret = nvme_init_non_mdts_limits(ctrl);
>> +		if (ret < 0)
>> +			return ret;
>> +	}
> 
> This looks better, but I don't think it will actually work as-is.
> The CNTRLTYPE field is newer than discovery controllers.  We
> special case them in nvme_init_subsystem, which should move to
> nvme_init_identify instead so that ctrl->cntrltype is always valid
> after that.
> 

Hmm, I think let's keep it simple with the following tagset check rather
than adding more code to make sure ctrl->cntrltype is valid:


nvme (nvme-5.18) # cat 0001-nvme-core-check-non-mdts-only-for-I-O-ctrl.patch
 From ee8439487a88b08e5abb6c6558ca6a902a0e6b1d Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <kch@nvidia.com>
Date: Fri, 8 Apr 2022 15:50:10 -0700
Subject: [PATCH] nvme-core: check non-mdts only for I/O ctrl

Validate that the ctrl is an I/O controller before checking the non-MDTS
limits, by checking the ctrl->tagset member, as ctrl->tagset is only
allocated for the controller's I/O queues from:

1. nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl)
2. nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
3. nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
4. nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
5. nvme_dev_add()

This also masks the following warning reported by nvme_log_error()
when running blktests nvme/002:

[ 2005.155946] run blktests nvme/002 at 2022-04-09 16:57:47
[ 2005.192223] loop: module loaded
[ 2005.196429] nvmet: adding nsid 1 to subsystem blktests-subsystem-0
[ 2005.200334] nvmet: adding nsid 1 to subsystem blktests-subsystem-1

<------------------------------SNIP---------------------------------->

[ 2008.958108] nvmet: adding nsid 1 to subsystem blktests-subsystem-997
[ 2008.962082] nvmet: adding nsid 1 to subsystem blktests-subsystem-998
[ 2008.966102] nvmet: adding nsid 1 to subsystem blktests-subsystem-999
[ 2008.973132] nvmet: creating discovery controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN testhostnqn.
*[ 2008.973196] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR*
[ 2008.974595] nvme nvme1: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
[ 2009.103248] nvme nvme1: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
  drivers/nvme/host/core.c | 9 ++++++---
  1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e8d6a1e52083..2e5779d233f2 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3101,9 +3101,12 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
  	if (ret)
  		return ret;

-	ret = nvme_init_non_mdts_limits(ctrl);
-	if (ret < 0)
-		return ret;
+	/* only check non mdts for the I/O controller */
+	if (ctrl->tagset) {
+		ret = nvme_init_non_mdts_limits(ctrl);
+		if (ret < 0)
+			return ret;
+	}

  	ret = nvme_configure_apst(ctrl);
  	if (ret < 0)
-- 
2.29.0

-ck



^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 12:40             ` Chaitanya Kulkarni
@ 2022-04-11 14:14               ` Keith Busch
  2022-04-11 20:44                 ` Chaitanya Kulkarni
  0 siblings, 1 reply; 32+ messages in thread
From: Keith Busch @ 2022-04-11 14:14 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Christoph Hellwig, Sagi Grimberg, linux-nvme

On Mon, Apr 11, 2022 at 12:40:22PM +0000, Chaitanya Kulkarni wrote:
> @@ -3101,9 +3101,12 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
>   	if (ret)
>   		return ret;
> 
> -	ret = nvme_init_non_mdts_limits(ctrl);
> -	if (ret < 0)
> -		return ret;
> +	/* only check non mdts for the I/O controller */
> +	if (ctrl->tagset) {
> +		ret = nvme_init_non_mdts_limits(ctrl);
> +		if (ret < 0)
> +			return ret;
> +	}

The nvme pci driver doesn't initialize the ctrl->tagset until after
nvme_init_ctrl_finish(), so this won't work.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 14:14               ` Keith Busch
@ 2022-04-11 20:44                 ` Chaitanya Kulkarni
  2022-04-11 23:51                   ` Keith Busch
  2022-05-10  6:21                   ` Christoph Hellwig
  0 siblings, 2 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 20:44 UTC (permalink / raw)
  To: Keith Busch; +Cc: Christoph Hellwig, Sagi Grimberg, linux-nvme

On 4/11/22 07:14, Keith Busch wrote:
> On Mon, Apr 11, 2022 at 12:40:22PM +0000, Chaitanya Kulkarni wrote:
>> @@ -3101,9 +3101,12 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
>>    	if (ret)
>>    		return ret;
>>
>> -	ret = nvme_init_non_mdts_limits(ctrl);
>> -	if (ret < 0)
>> -		return ret;
>> +	/* only check non mdts for the I/O controller */
>> +	if (ctrl->tagset) {
>> +		ret = nvme_init_non_mdts_limits(ctrl);
>> +		if (ret < 0)
>> +			return ret;
>> +	}
> 
> The nvme pci driver doesn't initialize the ctrl->tagset until after
> nvme_init_ctrl_finish(), so this won't work.

I think we should move nvme_init_non_mdts_limits() out of
nvme_init_ctrl_finish() and call it before the ctrl actually allocates
the I/O tagset?

Tagset allocation happens only once and is guarded by the new-ctrl
check in every transport, and it runs before each transport calls
nvme_scan_work() from the nvme_start_ctrl() path and proceeds to the
ns allocation, where the limits max_discard_{segments|sectors} and
max_write_zeroes_sectors are actually used in:

nvme_scan_work()
...
nvme_validate_or_alloc_ns()
  nvme_alloc_ns()
   nvme_update_ns_info()
    nvme_update_disk_info()

1. FC:
nvme_fc_create_association()
  nvme_fc_create_io_queues()
   allocate ctrl->tagset. <-- call nvme_init_non_mdts_limits() before this.
  nvme_start_ctrl()
   nvme_queue_scan()
    nvme_scan_work()
     path to namespace allocation and accessing the ctrl non-MDTS limits.

2. PCIe:
nvme_reset_work()
  nvme_dev_add()
   allocate ctrl->tagset. <-- call nvme_init_non_mdts_limits() before this.
  nvme_start_ctrl()
   nvme_queue_scan()
    nvme_scan_work()
     path to namespace allocation and accessing the ctrl non-MDTS limits.

3. RDMA:
nvme_rdma_setup_ctrl()
  nvme_rdma_configure_io_queues()
   allocate ctrl->tagset. <-- call nvme_init_non_mdts_limits() before this.
  nvme_start_ctrl()
   nvme_queue_scan()
    nvme_scan_work()
     path to namespace allocation and accessing the ctrl non-MDTS limits.

4. TCP:
nvme_tcp_setup_ctrl()
  nvme_tcp_configure_io_queues()
   allocate ctrl->tagset. <-- call nvme_init_non_mdts_limits() before this.
  nvme_start_ctrl()
   nvme_queue_scan()
    nvme_scan_work()
     path to namespace allocation and accessing the ctrl non-MDTS limits.


-ck

Based on this, something like:

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e8d6a1e52083..3af2abd67e60 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2872,7 +2872,7 @@ static inline u32 nvme_mps_to_sectors(struct nvme_ctrl *ctrl, u32 units)
         return val;
  }

-static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
+int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
  {
         struct nvme_command c = { };
         struct nvme_id_ctrl_nvm *id;
@@ -2924,6 +2924,7 @@ static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
         kfree(id);
         return ret;
  }
+EXPORT_SYMBOL_GPL(nvme_init_non_mdts_limits);

  static int nvme_init_identify(struct nvme_ctrl *ctrl)
  {
@@ -3101,10 +3102,6 @@ int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl)
         if (ret)
                 return ret;

-       ret = nvme_init_non_mdts_limits(ctrl);
-       if (ret < 0)
-               return ret;
-
         ret = nvme_configure_apst(ctrl);
         if (ret < 0)
                 return ret;
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 080f85f4105f..89a3d1d4f48f 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2882,6 +2882,10 @@ nvme_fc_create_io_queues(struct nvme_fc_ctrl *ctrl)
         unsigned int nr_io_queues;
         int ret;

+       ret = nvme_init_non_mdts_limits(&ctrl->ctrl);
+       if (ret < 0)
+               return ret;
+
         nr_io_queues = min(min(opts->nr_io_queues, num_online_cpus()),
                                 ctrl->lport->ops->max_hw_queues);
         ret = nvme_set_queue_count(&ctrl->ctrl, &nr_io_queues);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index a1f8403ffd78..1a661226fbb0 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -777,6 +777,7 @@ long nvme_ns_head_chr_ioctl(struct file *file, 
unsigned int cmd,
  long nvme_dev_ioctl(struct file *file, unsigned int cmd,
                 unsigned long arg);
  int nvme_getgeo(struct block_device *bdev, struct hd_geometry *geo);
+int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl);

  extern const struct attribute_group *nvme_ns_id_attr_groups[];
  extern const struct pr_ops nvme_pr_ops;
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 3554c20e78c3..ce4060c3d62e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2525,6 +2525,13 @@ static void nvme_dev_add(struct nvme_dev *dev)
         int ret;

         if (!dev->ctrl.tagset) {
+               ret = nvme_init_non_mdts_limits(&dev->ctrl);
+               if (ret < 0) {
+                       dev_warn(dev->ctrl.device,
+                               "reading non mdts limit error: %d\n", ret);
+                       return;
+               }
+
                 dev->tagset.ops = &nvme_mq_ops;
                 dev->tagset.nr_hw_queues = dev->online_queues - 1;
                 dev->tagset.nr_maps = 2; /* default + read */
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index c49b9c3c46f2..f6cf28a3b4c1 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -972,6 +972,10 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
                 return ret;

         if (new) {
+               ret = nvme_init_non_mdts_limits(&ctrl->ctrl);
+               if (ret < 0)
+                       goto out_free_io_queues;
+
                ctrl->ctrl.tagset = nvme_rdma_alloc_tagset(&ctrl->ctrl, false);
                 if (IS_ERR(ctrl->ctrl.tagset)) {
                         ret = PTR_ERR(ctrl->ctrl.tagset);
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 10fc45d95b86..9147459580d4 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1858,6 +1858,10 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
                 return ret;

         if (new) {
+               ret = nvme_init_non_mdts_limits(ctrl);
+               if (ret < 0)
+                       goto out_free_io_queues;
+
                 ctrl->tagset = nvme_tcp_alloc_tagset(ctrl, false);
                 if (IS_ERR(ctrl->tagset)) {
                         ret = PTR_ERR(ctrl->tagset);
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 59024af2da2e..2ac9017e966a 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -522,6 +522,10 @@ static int nvme_loop_create_io_queues(struct nvme_loop_ctrl *ctrl)
  {
         int ret;

+       ret = nvme_init_non_mdts_limits(&ctrl->ctrl);
+       if (ret < 0)
+               return ret;
+
         ret = nvme_loop_init_io_queues(ctrl);
         if (ret)
                 return ret;


-ck




^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 20:44                 ` Chaitanya Kulkarni
@ 2022-04-11 23:51                   ` Keith Busch
  2022-05-10  6:21                   ` Christoph Hellwig
  1 sibling, 0 replies; 32+ messages in thread
From: Keith Busch @ 2022-04-11 23:51 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Christoph Hellwig, Sagi Grimberg, linux-nvme

On Mon, Apr 11, 2022 at 08:44:59PM +0000, Chaitanya Kulkarni wrote:
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 3554c20e78c3..ce4060c3d62e 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2525,6 +2525,13 @@ static void nvme_dev_add(struct nvme_dev *dev)
>          int ret;
> 
>          if (!dev->ctrl.tagset) {
> +               ret = nvme_init_non_mdts_limits(&dev->ctrl);
> +               if (ret < 0) {
> +                       dev_warn(dev->ctrl.device,
> +                               "reading non mdts limit error: %d\n", ret);
> +                       return;
> +               }
> +
>                  dev->tagset.ops = &nvme_mq_ops;
>                  dev->tagset.nr_hw_queues = dev->online_queues - 1;
>                  dev->tagset.nr_maps = 2; /* default + read */

This isn't right either. A controller can change its limits after a reset.
It's unlikely outside some odd firmware upgrades, but it could happen, so
the limits need to be checked on each reset, not just the first one.
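
In other words (a sketch against the hunk above, not a tested change),
the call would have to move out of the one-time !dev->ctrl.tagset
branch, e.g.:

	static void nvme_dev_add(struct nvme_dev *dev)
	{
		int ret;

		/* sketch: query the limits on every reset, not just the first */
		ret = nvme_init_non_mdts_limits(&dev->ctrl);
		if (ret < 0) {
			dev_warn(dev->ctrl.device,
				 "reading non mdts limit error: %d\n", ret);
			return;
		}

		if (!dev->ctrl.tagset) {
			dev->tagset.ops = &nvme_mq_ops;
			...
		}
		...
	}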


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
  2022-04-11  6:12   ` Christoph Hellwig
  2022-04-11 10:28   ` Sagi Grimberg
@ 2022-04-13 12:07   ` Yi Zhang
  2022-04-15  5:59     ` Chaitanya Kulkarni
  2022-04-13 16:48   ` Jonathan Derrick
                     ` (3 subsequent siblings)
  6 siblings, 1 reply; 32+ messages in thread
From: Yi Zhang @ 2022-04-13 12:07 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: open list:NVM EXPRESS DRIVER, Christoph Hellwig, Sagi Grimberg,
	Keith Busch

Verified that this patch fixes the issue I reported here:
https://lore.kernel.org/linux-nvme/CAHj4cs_iC+FE8ZAXXZPeia1V3ZX7zRbeASdOP_8c7DLiFozNfA@mail.gmail.com/

Tested-by: Yi Zhang <yi.zhang@redhat.com>

On Mon, Apr 11, 2022 at 11:21 AM Chaitanya Kulkarni <kch@nvidia.com> wrote:
>
> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().
>
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>  drivers/nvme/host/core.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 449378a96a9f..12302be83a6c 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>  {
>         blk_status_t status = nvme_error_status(nvme_req(req)->status);
>
> -       if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +       if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>                 nvme_log_error(req);
>         nvme_end_req_zoned(req);
>         nvme_trace_bio_complete(req);
> @@ -1100,6 +1100,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>                         goto out;
>         }
>
> +       req->rq_flags |= RQF_QUIET;
>         ret = nvme_execute_rq(req, at_head);
>         if (result && ret >= 0)
>                 *result = nvme_req(req)->result;
> --
> 2.29.0
>
>


-- 
Best Regards,
  Yi Zhang



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
                     ` (2 preceding siblings ...)
  2022-04-13 12:07   ` Yi Zhang
@ 2022-04-13 16:48   ` Jonathan Derrick
  2022-04-13 16:49   ` Jonathan Derrick
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 32+ messages in thread
From: Jonathan Derrick @ 2022-04-13 16:48 UTC (permalink / raw)
  To: linux-nvme

Hi

On 4/10/2022 9:12 PM, Chaitanya Kulkarni wrote:
> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().
> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 449378a96a9f..12302be83a6c 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>   {
>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>   
> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>   		nvme_log_error(req);
>   	nvme_end_req_zoned(req);
>   	nvme_trace_bio_complete(req);
> @@ -1100,6 +1100,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>   			goto out;
>   	}
>   
> +	req->rq_flags |= RQF_QUIET;
Any reason not to make it part of the flags in the argument list and let
the block layer build the req with it?

eg:
flags |= REQ_QUIET;
if (qid == NVME_QID_ANY)
	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
else
	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
					qid ? qid - 1 : 0);

>   	ret = nvme_execute_rq(req, at_head);
>   	if (result && ret >= 0)
>   		*result = nvme_req(req)->result;


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
                     ` (3 preceding siblings ...)
  2022-04-13 16:48   ` Jonathan Derrick
@ 2022-04-13 16:49   ` Jonathan Derrick
  2022-04-13 16:52     ` Christoph Hellwig
  2022-04-13 16:58   ` Keith Busch
  2022-04-13 18:42   ` Alan Adamson
  6 siblings, 1 reply; 32+ messages in thread
From: Jonathan Derrick @ 2022-04-13 16:49 UTC (permalink / raw)
  To: Chaitanya Kulkarni, linux-nvme; +Cc: hch, sagi, kbusch

Hi,

On 4/10/2022 9:12 PM, Chaitanya Kulkarni wrote:
> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().
> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
>   drivers/nvme/host/core.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 449378a96a9f..12302be83a6c 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>   {
>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>   
> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>   		nvme_log_error(req);
>   	nvme_end_req_zoned(req);
>   	nvme_trace_bio_complete(req);
> @@ -1100,6 +1100,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>   			goto out;
>   	}
>   
> +	req->rq_flags |= RQF_QUIET;
Any reason not to make it part of the flags in the argument list and let
the block layer build the req with it?

eg:
flags |= REQ_QUIET;
if (qid == NVME_QID_ANY)
     req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
else
     req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
                     qid ? qid - 1 : 0);

>   	ret = nvme_execute_rq(req, at_head);
>   	if (result && ret >= 0)
>   		*result = nvme_req(req)->result;


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-13 16:49   ` Jonathan Derrick
@ 2022-04-13 16:52     ` Christoph Hellwig
  2022-04-13 16:57       ` Jonathan Derrick
  0 siblings, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2022-04-13 16:52 UTC (permalink / raw)
  To: Jonathan Derrick; +Cc: Chaitanya Kulkarni, linux-nvme, hch, sagi, kbusch

On Wed, Apr 13, 2022 at 10:49:33AM -0600, Jonathan Derrick wrote:
>>   +	req->rq_flags |= RQF_QUIET;
> Any reason not to make it part of the flags in the argument list and let
> the block layer build the req with it?
>
> eg:
> flags |= REQ_QUIET;

Because there is no REQ_QUIET, just RQF_QUIET and BIO_QUIET, and their
relation isn't quite as straightforward as it seems.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-13 16:52     ` Christoph Hellwig
@ 2022-04-13 16:57       ` Jonathan Derrick
  2022-04-13 17:09         ` Keith Busch
  0 siblings, 1 reply; 32+ messages in thread
From: Jonathan Derrick @ 2022-04-13 16:57 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Chaitanya Kulkarni, linux-nvme, sagi, kbusch



On 4/13/2022 10:52 AM, Christoph Hellwig wrote:
> On Wed, Apr 13, 2022 at 10:49:33AM -0600, Jonathan Derrick wrote:
>>>    +	req->rq_flags |= RQF_QUIET;
>> Any reason not to make it part of the flags in the argument list and let
>> the block layer build the req with it?
>>
>> eg:
>> flags |= REQ_QUIET;
> 
> Because there is no REQ_QUIET, just RQF_QUIET and BIO_QUIET, and their
> relation isn't quite as straightforward as it seems.

Well, let's suppose I had written RQF_QUIET instead of the non-existent
REQ_QUIET.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
                     ` (4 preceding siblings ...)
  2022-04-13 16:49   ` Jonathan Derrick
@ 2022-04-13 16:58   ` Keith Busch
  2022-04-13 18:42   ` Alan Adamson
  6 siblings, 0 replies; 32+ messages in thread
From: Keith Busch @ 2022-04-13 16:58 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi

On Sun, Apr 10, 2022 at 08:12:49PM -0700, Chaitanya Kulkarni wrote:
> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().

The subject should say RQF_QUIET instead of REQ_, but otherwise this looks good
to me.

Reviewed-by: Keith Busch <kbusch@kernel.org>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-13 16:57       ` Jonathan Derrick
@ 2022-04-13 17:09         ` Keith Busch
  2022-04-13 17:11           ` Jonathan Derrick
  2022-04-15  6:01           ` Chaitanya Kulkarni
  0 siblings, 2 replies; 32+ messages in thread
From: Keith Busch @ 2022-04-13 17:09 UTC (permalink / raw)
  To: Jonathan Derrick; +Cc: Christoph Hellwig, Chaitanya Kulkarni, linux-nvme, sagi

On Wed, Apr 13, 2022 at 10:57:19AM -0600, Jonathan Derrick wrote:
> 
> 
> On 4/13/2022 10:52 AM, Christoph Hellwig wrote:
> > On Wed, Apr 13, 2022 at 10:49:33AM -0600, Jonathan Derrick wrote:
> > > >    +	req->rq_flags |= RQF_QUIET;
> > > Any reason not to make it part of the flags in the argument list and let
> > > the block layer build the req with it?
> > > 
> > > eg:
> > > flags |= REQ_QUIET;
> > 
> > Because there is no REQ_QUIET, just RQF_QUIET and BIO_QUIET, and their
> > relation isn't quite as straightforward as it seems.
> 
> Well, let's suppose I had written RQF_QUIET instead of the non-existent
> REQ_QUIET.

The "flags" value passed to blk_mq_alloc_request() is not the same as the
values for rq_flags, though.

I guess you could add a new allocation flag, maybe called something like
BLK_MQ_REQ_QUIET to tell the block layer to set RQF_QUIET. That is similiar to
how BLK_MQ_REQ_PM is used for RQF_PM, but I'm not sure we want to add another
'if' check in the fast path.
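
Purely for illustration -- BLK_MQ_REQ_QUIET does not exist and the bit
value below is made up -- it would look something like:

	/* include/linux/blk-mq.h: hypothetical new allocation flag */
	BLK_MQ_REQ_QUIET	= (__force blk_mq_req_flags_t)(1 << 3),

	/* in the request setup path, next to the BLK_MQ_REQ_PM handling */
	if (data->flags & BLK_MQ_REQ_QUIET)	/* the extra 'if' in question */
		rq->rq_flags |= RQF_QUIET;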


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-13 17:09         ` Keith Busch
@ 2022-04-13 17:11           ` Jonathan Derrick
  2022-04-15  6:01           ` Chaitanya Kulkarni
  1 sibling, 0 replies; 32+ messages in thread
From: Jonathan Derrick @ 2022-04-13 17:11 UTC (permalink / raw)
  To: Keith Busch; +Cc: Christoph Hellwig, Chaitanya Kulkarni, linux-nvme, sagi



On 4/13/2022 11:09 AM, Keith Busch wrote:
> On Wed, Apr 13, 2022 at 10:57:19AM -0600, Jonathan Derrick wrote:
>>
>>
>> On 4/13/2022 10:52 AM, Christoph Hellwig wrote:
>>> On Wed, Apr 13, 2022 at 10:49:33AM -0600, Jonathan Derrick wrote:
>>>>>     +	req->rq_flags |= RQF_QUIET;
>>>> Any reason not to make it part of the flags in the argument list and let
>>>> the block layer build the req with it?
>>>>
>>>> eg:
>>>> flags |= REQ_QUIET;
>>>
>>> Because there is no REQ_QUIET, just RQF_QUIET and BIO_QUIET, and their
>>> relation isn't quite as straightforward as it seems.
>>
>> Well, let's suppose I had written RQF_QUIET instead of the non-existent
>> REQ_QUIET.
> 
> The "flags" value passed to blk_mq_alloc_request() is not the same as the
> values for rq_flags, though.
> 
> I guess you could add a new allocation flag, maybe called something like
> BLK_MQ_REQ_QUIET to tell the block layer to set RQF_QUIET. That is similiar to
> how BLK_MQ_REQ_PM is used for RQF_PM, but I'm not sure we want to add another
> 'if' check in the fast path.

Fair enough


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
                     ` (5 preceding siblings ...)
  2022-04-13 16:58   ` Keith Busch
@ 2022-04-13 18:42   ` Alan Adamson
  6 siblings, 0 replies; 32+ messages in thread
From: Alan Adamson @ 2022-04-13 18:42 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, kbusch


> On Apr 10, 2022, at 8:12 PM, Chaitanya Kulkarni <kch@nvidia.com> wrote:
> 
> Mark internal passthru requests quiet in the submission path by setting
> the RQF_QUIET flag in __nvme_submit_sync_cmd(). In the completion path,
> if an nvme request results in an error and the request is marked
> RQF_QUIET, don't log the error with nvme_log_error().

Reviewed-by: Alan Adamson <alan.adamson@oracle.com>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-13 12:07   ` Yi Zhang
@ 2022-04-15  5:59     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-15  5:59 UTC (permalink / raw)
  To: Yi Zhang, Chaitanya Kulkarni
  Cc: open list:NVM EXPRESS DRIVER, Christoph Hellwig, Sagi Grimberg,
	Keith Busch

On 4/13/22 05:07, Yi Zhang wrote:
> Verified the issue with this patch I reported here:
> https://lore.kernel.org/linux-nvme/CAHj4cs_iC+FE8ZAXXZPeia1V3ZX7zRbeASdOP_8c7DLiFozNfA@mail.gmail.com/
> 
> Tested-by: Yi Zhang <yi.zhang@redhat.com>
> 

Thanks a lot for testing it.

-ck




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET
  2022-04-13 17:09         ` Keith Busch
  2022-04-13 17:11           ` Jonathan Derrick
@ 2022-04-15  6:01           ` Chaitanya Kulkarni
  1 sibling, 0 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-15  6:01 UTC (permalink / raw)
  To: Keith Busch, Jonathan Derrick
  Cc: Christoph Hellwig, Chaitanya Kulkarni, linux-nvme, sagi


>> Well, let's suppose I had written RQF_QUIET instead of the non-existent
>> REQ_QUIET.
> 
> The "flags" value passed to blk_mq_alloc_request() is not the same as the
> values for rq_flags, though.
> 
> I guess you could add a new allocation flag, maybe called something like
> BLK_MQ_REQ_QUIET to tell the block layer to set RQF_QUIET. That is similar to
> how BLK_MQ_REQ_PM is used for RQF_PM, but I'm not sure we want to add another
> 'if' check in the fast path.

Unless it is absolutely necessary, we should avoid that.

-ck


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-04-11 20:44                 ` Chaitanya Kulkarni
  2022-04-11 23:51                   ` Keith Busch
@ 2022-05-10  6:21                   ` Christoph Hellwig
  2022-05-11  7:19                     ` Chaitanya Kulkarni
  1 sibling, 1 reply; 32+ messages in thread
From: Christoph Hellwig @ 2022-05-10  6:21 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: Keith Busch, Christoph Hellwig, Sagi Grimberg, linux-nvme

Can you pick up where we stopped and look into a fix here?


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/3] nvmet: handle admin default command set identifier
  2022-04-11  6:18   ` Christoph Hellwig
@ 2022-05-10  6:33     ` Christoph Hellwig
  0 siblings, 0 replies; 32+ messages in thread
From: Christoph Hellwig @ 2022-05-10  6:33 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi, kbusch

Any chance you could respin this to actually report the correct
Identify data structure?

On Mon, Apr 11, 2022 at 08:18:54AM +0200, Christoph Hellwig wrote:
> On Sun, Apr 10, 2022 at 08:12:47PM -0700, Chaitanya Kulkarni wrote:
> > -static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
> > +static void nvmet_execute_identify_cns_cs_ctrl_nvm(struct nvmet_req *req)
> 
> No, this does not work.
> 
> nvmet_execute_identify_ctrl handles the
> "Identify Controller data structure" from the base spec, CNS 01h, no CSI.
> 
> The "I/O Command Set Specific Identify Controller data structure" from
> the NVM Command set spec (CNS 06h, CSI 00h) is an entirely different
> data structure and needs separate handling.
> 
> I'd also drop the cns here and in the other names; every identify command
> has a CNS value associated with it, and these names are already getting
> rather long.
> 
> > +			if (IS_ENABLED(CONFIG_BLK_DEV_ZONED)) {
> > +				nvmet_execute_identify_cns_cs_ctrl_zns(req);
> > +				return;
> >  			}
> > +			break;
> 
> Nit: I'd invert the IS_ENABLED check.  This saves one line, and one
> level of indentation and just reads easier as well.
---end quoted text---


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl
  2022-05-10  6:21                   ` Christoph Hellwig
@ 2022-05-11  7:19                     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 32+ messages in thread
From: Chaitanya Kulkarni @ 2022-05-11  7:19 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Keith Busch, Sagi Grimberg, linux-nvme

On 5/9/2022 11:21 PM, Christoph Hellwig wrote:
> Can you pick up where we stopped and look into a fix here?
> 

Yes, give me a couple of days please; I'm out.

-ck

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread

Thread overview: 32+ messages
2022-04-11  3:12 [PATCH 0/3] nvme: fix internal passthru error messages Chaitanya Kulkarni
2022-04-11  3:12 ` [PATCH 1/3] nvmet: handle admin default command set identifier Chaitanya Kulkarni
2022-04-11  6:18   ` Christoph Hellwig
2022-05-10  6:33     ` Christoph Hellwig
2022-04-11  3:12 ` [PATCH 2/3] nvme-core: don't check non-mdts for disc ctrl Chaitanya Kulkarni
2022-04-11  6:13   ` Christoph Hellwig
2022-04-11 10:27     ` Sagi Grimberg
2022-04-11 10:49       ` Chaitanya Kulkarni
2022-04-11 12:09         ` Chaitanya Kulkarni
2022-04-11 12:12           ` Christoph Hellwig
2022-04-11 12:40             ` Chaitanya Kulkarni
2022-04-11 14:14               ` Keith Busch
2022-04-11 20:44                 ` Chaitanya Kulkarni
2022-04-11 23:51                   ` Keith Busch
2022-05-10  6:21                   ` Christoph Hellwig
2022-05-11  7:19                     ` Chaitanya Kulkarni
2022-04-11  3:12 ` [PATCH 3/3] nvme-core: mark internal passthru req REQ_QUIET Chaitanya Kulkarni
2022-04-11  6:12   ` Christoph Hellwig
2022-04-11 10:48     ` Chaitanya Kulkarni
2022-04-11 10:28   ` Sagi Grimberg
2022-04-11 10:49     ` Chaitanya Kulkarni
2022-04-13 12:07   ` Yi Zhang
2022-04-15  5:59     ` Chaitanya Kulkarni
2022-04-13 16:48   ` Jonathan Derrick
2022-04-13 16:49   ` Jonathan Derrick
2022-04-13 16:52     ` Christoph Hellwig
2022-04-13 16:57       ` Jonathan Derrick
2022-04-13 17:09         ` Keith Busch
2022-04-13 17:11           ` Jonathan Derrick
2022-04-15  6:01           ` Chaitanya Kulkarni
2022-04-13 16:58   ` Keith Busch
2022-04-13 18:42   ` Alan Adamson
