* [PATCH 0/3] nvme-fabrics: auth fixes and cleanup
@ 2023-05-22 13:04 Chaitanya Kulkarni
2023-05-22 13:04 ` [PATCH 1/3] nvme-fabrics: fix qid in error message Chaitanya Kulkarni
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-22 13:04 UTC (permalink / raw)
To: hare; +Cc: linux-nvme, hch, sagi, Chaitanya Kulkarni
Hi,
This series fixes the qid in the error message for the I/O queue connect,
adds the missing error out when the I/O connect command fails before the
post-connect auth processing, and factors out a helper for the post-connect
auth processing.
Below are the blktests nvme results on the nvme-loop and nvme-tcp transports.
-ck
Chaitanya Kulkarni (3):
nvme-fabrics: fix qid in error message
nvme-fabrics: error out when connect I/O fails
nvme-fabrics: factor out common for auth
drivers/nvme/host/fabrics.c | 81 +++++++++++++++++--------------------
1 file changed, 36 insertions(+), 45 deletions(-)
nvme (nvme-6.5) # git log -3
commit aa78c389c7d7bdd5263407202628bcd642ac16b8 (HEAD -> nvme-6.5)
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date: Mon May 22 05:29:27 2023 -0700
nvme-fabrics: factor out common for auth
nvmf_connect_admin_queue() and nvmf_connect_io_queue() share common code
for post-connect command authentication processing, which includes
returning the appropriate NVMe authentication status based on the
command result, negotiating authentication per qid, and waiting on
the negotiation per qid.
Add a common helper function to reduce the code duplication with the
necessary arguments.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
commit ec195cef9aa612c2cef93a504e74b53114d2f1c9
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date: Mon May 22 04:17:21 2023 -0700
nvme-fabrics: error out when connect I/O fails
In nvmf_connect_io_queue(), when the connect I/O command fails, we just
log the connect error and continue with authentication processing.
Add a goto out_free_data after logging the connect error to error out.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
commit c7ba03fdaaffaeb0d520ceb668a85ab5712bdbad
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date: Mon May 22 03:59:52 2023 -0700
nvme-fabrics: fix qid in error message
When secure concatenation is not implemented, instead of statically
printing qid=0 in the warning message, print the qid received from the
caller, since for I/O queues the qid can be non-zero.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
nvme (nvme-6.5) #
+ umount /mnt/nvme0n1
+ clear_dmesg
./compile_nvme.sh: line 3: clear_dmesg: command not found
umount: /mnt/nvme0n1: no mount point specified.
+ ./delete.sh
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 0 controller(s)
real 0m0.009s
user 0m0.000s
sys 0m0.009s
+ rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
+ rmdir /sys/kernel/config/nvmet/ports/1
rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file or directory
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
./delete.sh: line 14: /sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such file or directory
+ rmdir 'config/nullb/nullb*'
rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config
└── pci_ep
├── controllers
└── functions
3 directories, 0 files
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
+ sleep 1
+ git diff
++ nproc
+ make -j 48 M=drivers/nvme/host/ clean
++ nproc
+ make -j 48 M=drivers/nvme/ modules
CC [M] drivers/nvme/host/core.o
CC [M] drivers/nvme/host/ioctl.o
CC [M] drivers/nvme/host/sysfs.o
CC [M] drivers/nvme/host/constants.o
CC [M] drivers/nvme/host/trace.o
CC [M] drivers/nvme/host/multipath.o
CC [M] drivers/nvme/host/zns.o
CC [M] drivers/nvme/host/fault_inject.o
CC [M] drivers/nvme/host/hwmon.o
CC [M] drivers/nvme/host/auth.o
CC [M] drivers/nvme/host/pci.o
CC [M] drivers/nvme/host/fabrics.o
CC [M] drivers/nvme/host/rdma.o
CC [M] drivers/nvme/host/fc.o
CC [M] drivers/nvme/host/tcp.o
LD [M] drivers/nvme/host/nvme-fabrics.o
LD [M] drivers/nvme/host/nvme.o
LD [M] drivers/nvme/host/nvme-rdma.o
LD [M] drivers/nvme/host/nvme-fc.o
LD [M] drivers/nvme/host/nvme-tcp.o
LD [M] drivers/nvme/host/nvme-core.o
MODPOST drivers/nvme/Module.symvers
CC [M] drivers/nvme/host/nvme-core.mod.o
CC [M] drivers/nvme/host/nvme.mod.o
CC [M] drivers/nvme/host/nvme-fabrics.mod.o
CC [M] drivers/nvme/host/nvme-rdma.mod.o
CC [M] drivers/nvme/host/nvme-fc.mod.o
CC [M] drivers/nvme/host/nvme-tcp.mod.o
LD [M] drivers/nvme/host/nvme-core.ko
LD [M] drivers/nvme/host/nvme-fabrics.ko
LD [M] drivers/nvme/host/nvme-fc.ko
LD [M] drivers/nvme/host/nvme.ko
LD [M] drivers/nvme/host/nvme-rdma.ko
LD [M] drivers/nvme/host/nvme-tcp.ko
+ HOST=drivers/nvme/host
+ TARGET=drivers/nvme/target
++ uname -r
+ HOST_DEST=/lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/host/
++ uname -r
+ TARGET_DEST=/lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/target/
+ cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko /lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/host//
+ cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko /lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/target//
+ ls -lrth /lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/host/ /lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/target//
/lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/host/:
total 7.8M
-rw-r--r--. 1 root root 3.8M May 22 05:42 nvme-core.ko
-rw-r--r--. 1 root root 495K May 22 05:42 nvme-fabrics.ko
-rw-r--r--. 1 root root 981K May 22 05:42 nvme-fc.ko
-rw-r--r--. 1 root root 785K May 22 05:42 nvme.ko
-rw-r--r--. 1 root root 929K May 22 05:42 nvme-rdma.ko
-rw-r--r--. 1 root root 906K May 22 05:42 nvme-tcp.ko
/lib/modules/6.4.0-rc2nvme+/kernel/drivers/nvme/target//:
total 7.4M
-rw-r--r--. 1 root root 537K May 22 05:42 nvme-fcloop.ko
-rw-r--r--. 1 root root 476K May 22 05:42 nvme-loop.ko
-rw-r--r--. 1 root root 805K May 22 05:42 nvmet-fc.ko
-rw-r--r--. 1 root root 4.0M May 22 05:42 nvmet.ko
-rw-r--r--. 1 root root 899K May 22 05:42 nvmet-rdma.ko
-rw-r--r--. 1 root root 761K May 22 05:42 nvmet-tcp.ko
+ sync
+ sync
+ sync
+ modprobe nvme
+ echo 'Press enter to continue ...'
Press enter to continue ...
################nvme_trtype=loop############
nvme/002 (create many subsystems and test discovery) [passed]
runtime 41.534s ... 41.455s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
runtime 10.166s ... 10.160s
nvme/004 (test nvme and nvmet UUID NS descriptors) [passed]
runtime 1.626s ... 1.696s
nvme/005 (reset local loopback target) [passed]
runtime 2.010s ... 2.010s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
runtime 0.125s ... 0.127s
nvme/007 (create an NVMeOF target with a file-backed ns) [passed]
runtime 0.084s ... 0.081s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
runtime 1.668s ... 1.679s
nvme/009 (create an NVMeOF host with a file-backed ns) [passed]
runtime 1.629s ... 1.678s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
runtime 82.687s ... 83.190s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
runtime 71.740s ... 81.613s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
runtime 28.046s ... 77.525s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
runtime 5.312s ... 69.935s
nvme/014 (flush a NVMeOF block device-backed ns) [passed]
runtime 6.074s ... 5.889s
nvme/015 (unit test for NVMe flush for file backed ns) [passed]
runtime 4.560s ... 4.558s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
runtime 19.639s ... 19.980s
nvme/017 (create/delete many file-ns and test discovery) [passed]
runtime ... 19.621s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
runtime 1.324s ... 1.626s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
runtime 1.366s ... 1.682s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
runtime 1.301s ... 1.631s
nvme/021 (test NVMe list command on NVMeOF file-backed ns) [passed]
runtime 1.325s ... 1.651s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns) [passed]
runtime 1.382s ... 2.022s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
runtime 1.357s ... 1.668s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
runtime 1.299s ... 1.616s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
runtime 1.314s ... 1.623s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
runtime 1.310s ... 1.650s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
runtime 1.320s ... 1.631s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
runtime 1.325s ... 1.615s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
runtime 1.673s ... 1.936s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
runtime 0.327s ... 0.418s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
runtime 1.673s ... 4.829s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
runtime 0.050s ... 0.041s
nvme/040 (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
runtime 7.480s ... 8.171s
nvme/041 (Create authenticated connections) [passed]
runtime 1.246s ... 1.517s
nvme/042 (Test dhchap key types for authenticated connections) [passed]
runtime 7.842s ... 9.730s
nvme/043 (Test hash and DH group variations for authenticated connections) [passed]
runtime 1.558s ... 7.954s
nvme/044 (Test bi-directional authentication) [passed]
runtime 1.578s ... 2.140s
nvme/045 (Test re-authentication) [passed]
runtime 5.866s ... 4.872s
nvme/047 (test different queue types for fabric transports) [not run]
runtime 2.372s ...
nvme_trtype=loop is not supported in this test
nvme/048 (Test queue count changes on reconnect) [not run]
runtime 5.651s ...
nvme_trtype=loop is not supported in this test
################nvme_trtype=tcp############
nvme/002 (create many subsystems and test discovery) [not run]
runtime 41.455s ...
nvme_trtype=tcp is not supported in this test
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
runtime 10.160s ... 10.183s
nvme/004 (test nvme and nvmet UUID NS descriptors) [passed]
runtime 1.696s ... 1.323s
nvme/005 (reset local loopback target) [passed]
runtime 2.010s ... 1.411s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
runtime 0.127s ... 0.124s
nvme/007 (create an NVMeOF target with a file-backed ns) [passed]
runtime 0.081s ... 0.089s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
runtime 1.679s ... 1.325s
nvme/009 (create an NVMeOF host with a file-backed ns) [passed]
runtime 1.678s ... 1.299s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
runtime 83.190s ... 83.417s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
runtime 81.613s ... 87.657s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
runtime 77.525s ... 74.713s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
runtime 69.935s ... 86.551s
nvme/014 (flush a NVMeOF block device-backed ns) [passed]
runtime 5.889s ... 5.366s
nvme/015 (unit test for NVMe flush for file backed ns) [passed]
runtime 4.558s ... 3.875s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
runtime 19.980s ...
nvme_trtype=tcp is not supported in this test
nvme/017 (create/delete many file-ns and test discovery) [not run]
runtime 19.621s ...
nvme_trtype=tcp is not supported in this test
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
runtime 1.626s ... 1.308s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
runtime 1.682s ... 1.339s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
runtime 1.631s ... 1.292s
nvme/021 (test NVMe list command on NVMeOF file-backed ns) [passed]
runtime 1.651s ... 1.297s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns) [passed]
runtime 2.022s ... 1.367s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
runtime 1.668s ... 1.348s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
runtime 1.616s ... 1.308s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
runtime 1.623s ... 1.294s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
runtime 1.650s ... 1.300s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
runtime 1.631s ... 1.297s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
runtime 1.615s ... 1.285s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
runtime 1.936s ... 1.607s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
runtime 0.418s ... 0.289s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
runtime 4.829s ... 1.513s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
runtime 0.041s ... 0.050s
nvme/040 (test nvme fabrics controller reset/disconnect operation during I/O) [passed]
runtime 8.171s ... 7.430s
nvme/041 (Create authenticated connections) [passed]
runtime 1.517s ... 1.174s
nvme/042 (Test dhchap key types for authenticated connections) [passed]
runtime 9.730s ... 7.371s
nvme/043 (Test hash and DH group variations for authenticated connections) [passed]
runtime 7.954s ... 1.358s
nvme/044 (Test bi-directional authentication) [passed]
runtime 2.140s ... 1.521s
nvme/045 (Test re-authentication) [passed]
runtime 4.872s ... 5.712s
nvme/047 (test different queue types for fabric transports) [passed]
runtime ... 2.286s
nvme/048 (Test queue count changes on reconnect) [passed]
runtime ... 5.600s
--
2.40.0
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH 1/3] nvme-fabrics: fix qid in error message
2023-05-22 13:04 [PATCH 0/3] nvme-fabrics: auth fixes and cleanup Chaitanya Kulkarni
@ 2023-05-22 13:04 ` Chaitanya Kulkarni
2023-05-23 6:58 ` Christoph Hellwig
2023-05-22 13:04 ` [PATCH 2/3] nvme-fabrics: error out when connect I/O fails Chaitanya Kulkarni
2023-05-22 13:04 ` [PATCH 3/3] nvme-fabrics: factor out common for auth Chaitanya Kulkarni
2 siblings, 1 reply; 8+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-22 13:04 UTC (permalink / raw)
To: hare; +Cc: linux-nvme, hch, sagi, Chaitanya Kulkarni
When secure concatenation is not implemented, instead of statically
printing qid=0 in the warning message, print the qid received from the
caller, since for I/O queues the qid can be non-zero.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/nvme/host/fabrics.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index b1fa27b60917..529a86aea5f5 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -554,7 +554,8 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
/* Secure concatenation is not implemented */
if (result & NVME_CONNECT_AUTHREQ_ASCR) {
dev_warn(ctrl->device,
- "qid 0: secure concatenation is not supported\n");
+ "qid %d: secure concatenation is not supported\n",
+ qid);
ret = NVME_SC_AUTH_REQUIRED;
goto out_free_data;
}
--
2.40.0
* [PATCH 2/3] nvme-fabrics: error out when connect I/O fails
2023-05-22 13:04 [PATCH 0/3] nvme-fabrics: auth fixes and cleanup Chaitanya Kulkarni
2023-05-22 13:04 ` [PATCH 1/3] nvme-fabrics: fix qid in error message Chaitanya Kulkarni
@ 2023-05-22 13:04 ` Chaitanya Kulkarni
2023-05-23 6:58 ` Christoph Hellwig
2023-05-22 13:04 ` [PATCH 3/3] nvme-fabrics: factor out common for auth Chaitanya Kulkarni
2 siblings, 1 reply; 8+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-22 13:04 UTC (permalink / raw)
To: hare; +Cc: linux-nvme, hch, sagi, Chaitanya Kulkarni
In nvmf_connect_io_queue(), when the connect I/O command fails, we just
log the connect error and continue with authentication processing.
Add a goto out_free_data after logging the connect error to error out.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/nvme/host/fabrics.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 529a86aea5f5..9d3df63eb49a 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -548,6 +548,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
+ goto out_free_data;
}
result = le32_to_cpu(res.u32);
if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)) {
--
2.40.0
* [PATCH 3/3] nvme-fabrics: factor out common for auth
2023-05-22 13:04 [PATCH 0/3] nvme-fabrics: auth fixes and cleanup Chaitanya Kulkarni
2023-05-22 13:04 ` [PATCH 1/3] nvme-fabrics: fix qid in error message Chaitanya Kulkarni
2023-05-22 13:04 ` [PATCH 2/3] nvme-fabrics: error out when connect I/O fails Chaitanya Kulkarni
@ 2023-05-22 13:04 ` Chaitanya Kulkarni
2023-05-23 7:00 ` Christoph Hellwig
2 siblings, 1 reply; 8+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-22 13:04 UTC (permalink / raw)
To: hare; +Cc: linux-nvme, hch, sagi, Chaitanya Kulkarni
nvmf_connect_admin_queue() and nvmf_connect_io_queue() share common code
for post-connect command authentication processing, which includes
returning the appropriate NVMe authentication status based on the
command result, negotiating authentication per qid, and waiting on
the negotiation per qid.
Add a common helper function to reduce the code duplication, taking the
necessary arguments: the qid and the result of the connect command.
No functional changes in this patch.
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/nvme/host/fabrics.c | 81 ++++++++++++++++---------------------
1 file changed, 35 insertions(+), 46 deletions(-)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 9d3df63eb49a..1d48c0b77cd0 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -433,6 +433,39 @@ static void nvmf_connect_cmd_prep(struct nvme_ctrl *ctrl, u16 qid,
cmd->connect.cattr |= NVME_CONNECT_DISABLE_SQFLOW;
}
+static u16 nvmf_auth_post_queue_connect(struct nvme_ctrl *ctrl, u16 qid,
+ u32 result)
+{
+ int ret;
+
+ if (!(result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)))
+ return NVME_SC_SUCCESS;
+
+ /* Secure concatenation is not implemented */
+ if (result & NVME_CONNECT_AUTHREQ_ASCR) {
+ dev_warn(ctrl->device,
+ "qid %d: secure concatenation is not supported\n",
+ qid);
+ return NVME_SC_AUTH_REQUIRED;
+ }
+ /* Authentication required */
+ ret = nvme_auth_negotiate(ctrl, qid);
+ if (ret) {
+ dev_warn(ctrl->device,
+ "qid %d: authentication setup failed\n", qid);
+ return NVME_SC_AUTH_REQUIRED;
+ }
+ ret = nvme_auth_wait(ctrl, qid);
+ if (ret) {
+ dev_warn(ctrl->device, "qid %u: authentication failed\n", qid);
+ return ret;
+ }
+ if (!ret && qid == 0)
+ dev_info(ctrl->device, "qid 0: authenticated\n");
+
+ return NVME_SC_SUCCESS;
+}
+
/**
* nvmf_connect_admin_queue() - NVMe Fabrics Admin Queue "Connect"
* API function.
@@ -478,30 +511,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
result = le32_to_cpu(res.u32);
ctrl->cntlid = result & 0xFFFF;
- if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)) {
- /* Secure concatenation is not implemented */
- if (result & NVME_CONNECT_AUTHREQ_ASCR) {
- dev_warn(ctrl->device,
- "qid 0: secure concatenation is not supported\n");
- ret = NVME_SC_AUTH_REQUIRED;
- goto out_free_data;
- }
- /* Authentication required */
- ret = nvme_auth_negotiate(ctrl, 0);
- if (ret) {
- dev_warn(ctrl->device,
- "qid 0: authentication setup failed\n");
- ret = NVME_SC_AUTH_REQUIRED;
- goto out_free_data;
- }
- ret = nvme_auth_wait(ctrl, 0);
- if (ret)
- dev_warn(ctrl->device,
- "qid 0: authentication failed\n");
- else
- dev_info(ctrl->device,
- "qid 0: authenticated\n");
- }
+ ret = nvmf_auth_post_queue_connect(ctrl, 0, result);
out_free_data:
kfree(data);
return ret;
@@ -551,28 +561,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
goto out_free_data;
}
result = le32_to_cpu(res.u32);
- if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)) {
- /* Secure concatenation is not implemented */
- if (result & NVME_CONNECT_AUTHREQ_ASCR) {
- dev_warn(ctrl->device,
- "qid %d: secure concatenation is not supported\n",
- qid);
- ret = NVME_SC_AUTH_REQUIRED;
- goto out_free_data;
- }
- /* Authentication required */
- ret = nvme_auth_negotiate(ctrl, qid);
- if (ret) {
- dev_warn(ctrl->device,
- "qid %d: authentication setup failed\n", qid);
- ret = NVME_SC_AUTH_REQUIRED;
- } else {
- ret = nvme_auth_wait(ctrl, qid);
- if (ret)
- dev_warn(ctrl->device,
- "qid %u: authentication failed\n", qid);
- }
- }
+ ret = nvmf_auth_post_queue_connect(ctrl, qid, result);
out_free_data:
kfree(data);
return ret;
--
2.40.0
* Re: [PATCH 1/3] nvme-fabrics: fix qid in error message
2023-05-22 13:04 ` [PATCH 1/3] nvme-fabrics: fix qid in error message Chaitanya Kulkarni
@ 2023-05-23 6:58 ` Christoph Hellwig
0 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2023-05-23 6:58 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: hare, linux-nvme, hch, sagi
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH 2/3] nvme-fabrics: error out when connect I/O fails
2023-05-22 13:04 ` [PATCH 2/3] nvme-fabrics: error out when connect I/O fails Chaitanya Kulkarni
@ 2023-05-23 6:58 ` Christoph Hellwig
0 siblings, 0 replies; 8+ messages in thread
From: Christoph Hellwig @ 2023-05-23 6:58 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: hare, linux-nvme, hch, sagi
On Mon, May 22, 2023 at 06:04:07AM -0700, Chaitanya Kulkarni wrote:
> In nvmf_connect_io_queue(), when the connect I/O command fails, we just
> log the connect error and continue with authentication processing.
>
> Add a goto out_free_data after logging the connect error to error out.
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH 3/3] nvme-fabrics: factor out common for auth
2023-05-22 13:04 ` [PATCH 3/3] nvme-fabrics: factor out common for auth Chaitanya Kulkarni
@ 2023-05-23 7:00 ` Christoph Hellwig
2023-05-23 8:31 ` Chaitanya Kulkarni
0 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2023-05-23 7:00 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: hare, linux-nvme, hch, sagi
On Mon, May 22, 2023 at 06:04:08AM -0700, Chaitanya Kulkarni wrote:
> nvmf_connect_admin_queue() and nvmf_connect_io_queue() share common code
> for post-connect command authentication processing, which includes
> returning the appropriate NVMe authentication status based on the
> command result, negotiating authentication per qid, and waiting on
> the negotiation per qid.
>
> Add a common helper function to reduce the code duplication, taking the
> necessary arguments: the qid and the result of the connect command.
>
> No functional changes in this patch.
>
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> ---
> drivers/nvme/host/fabrics.c | 81 ++++++++++++++++---------------------
> 1 file changed, 35 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 9d3df63eb49a..1d48c0b77cd0 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -433,6 +433,39 @@ static void nvmf_connect_cmd_prep(struct nvme_ctrl *ctrl, u16 qid,
> cmd->connect.cattr |= NVME_CONNECT_DISABLE_SQFLOW;
> }
>
> +static u16 nvmf_auth_post_queue_connect(struct nvme_ctrl *ctrl, u16 qid,
> + u32 result)
> +{
> + int ret;
> +
> + if (!(result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)))
> + return NVME_SC_SUCCESS;
While it's a minor duplication, I'd prefer to keep this check in the
callers. That makes it very explicit that we are only calling into the
helper when authentication is required.
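A minimal, stubbed sketch of the caller shape this suggests, with the
AUTHREQ test kept at the call site (the flag values and stub names here
are illustrative; the real definitions live in include/linux/nvme.h and
the patch itself):

```c
/* Illustrative values; the real flags are defined in include/linux/nvme.h. */
#define NVME_CONNECT_AUTHREQ_ATR  (1u << 17)
#define NVME_CONNECT_AUTHREQ_ASCR (1u << 18)

static int g_helper_calls;	/* counts helper invocations for the demo */

/* Stub standing in for the factored-out nvmf_auth_post_queue_connect(). */
static int auth_post_queue_connect_stub(unsigned int qid, unsigned int result)
{
	(void)qid;
	(void)result;
	g_helper_calls++;
	return 0;
}

/* Caller shape per the review comment: the AUTHREQ test stays at the
 * call site, so the helper is only entered when auth is required. */
static int connect_queue_tail(unsigned int qid, unsigned int result)
{
	int ret = 0;

	if (result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR))
		ret = auth_post_queue_connect_stub(qid, result);
	return ret;
}
```

With this shape the helper's own AUTHREQ test becomes unnecessary, at
the cost of one duplicated condition per caller.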
> + ret = nvme_auth_wait(ctrl, qid);
> + if (ret) {
> + dev_warn(ctrl->device, "qid %u: authentication failed\n", qid);
> + return ret;
> + }
> + if (!ret && qid == 0)
ret must be 0 at this point based on the check above.
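A stubbed sketch of why (hypothetical stub names, not the kernel code
itself): once the failing nvme_auth_wait() path has returned early, ret
is known to be 0, so testing the qid alone suffices:

```c
static int g_auth_wait_ret;	/* stub knob: 0 = auth success, nonzero = failure */
static int g_announced;		/* records whether "qid 0: authenticated" would be logged */

/* Stub standing in for nvme_auth_wait(). */
static int auth_wait_stub(unsigned int qid)
{
	(void)qid;
	return g_auth_wait_ret;
}

/* Tail of the helper: after the early return on failure, ret is
 * provably 0, so `if (!ret && qid == 0)` collapses to `if (qid == 0)`. */
static int helper_tail(unsigned int qid)
{
	int ret = auth_wait_stub(qid);

	g_announced = 0;
	if (ret)
		return ret;	/* authentication failed; propagate */
	if (qid == 0)		/* no need to retest ret here */
		g_announced = 1;
	return 0;
}
```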
* Re: [PATCH 3/3] nvme-fabrics: factor out common for auth
2023-05-23 7:00 ` Christoph Hellwig
@ 2023-05-23 8:31 ` Chaitanya Kulkarni
0 siblings, 0 replies; 8+ messages in thread
From: Chaitanya Kulkarni @ 2023-05-23 8:31 UTC (permalink / raw)
To: Christoph Hellwig, Chaitanya Kulkarni; +Cc: hare, linux-nvme, sagi
On 5/23/2023 12:00 AM, Christoph Hellwig wrote:
> On Mon, May 22, 2023 at 06:04:08AM -0700, Chaitanya Kulkarni wrote:
>> nvmf_connect_admin_queue() and nvmf_connect_io_queue() share common code
>> for post-connect command authentication processing, which includes
>> returning the appropriate NVMe authentication status based on the
>> command result, negotiating authentication per qid, and waiting on
>> the negotiation per qid.
>>
>> Add a common helper function to reduce the code duplication, taking the
>> necessary arguments: the qid and the result of the connect command.
>>
>> No functional changes in this patch.
>>
>> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
>> ---
>> drivers/nvme/host/fabrics.c | 81 ++++++++++++++++---------------------
>> 1 file changed, 35 insertions(+), 46 deletions(-)
>>
>> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
>> index 9d3df63eb49a..1d48c0b77cd0 100644
>> --- a/drivers/nvme/host/fabrics.c
>> +++ b/drivers/nvme/host/fabrics.c
>> @@ -433,6 +433,39 @@ static void nvmf_connect_cmd_prep(struct nvme_ctrl *ctrl, u16 qid,
>> cmd->connect.cattr |= NVME_CONNECT_DISABLE_SQFLOW;
>> }
>>
>> +static u16 nvmf_auth_post_queue_connect(struct nvme_ctrl *ctrl, u16 qid,
>> + u32 result)
>> +{
>> + int ret;
>> +
>> + if (!(result & (NVME_CONNECT_AUTHREQ_ATR | NVME_CONNECT_AUTHREQ_ASCR)))
>> + return NVME_SC_SUCCESS;
>
> While it's a minor duplication, I'd prepfer to keep this check in the
> callers. That makes it very explicit we are only calling into the helper
> when authentication is required.
>
We are cleaning up so much code that I guess the addition of 2 lines in
the admin/I/O connect callers should be fine; I'll add it in V2.
>> + ret = nvme_auth_wait(ctrl, qid);
>> + if (ret) {
>> + dev_warn(ctrl->device, "qid %u: authentication failed\n", qid);
>> + return ret;
>> + }
>> + if (!ret && qid == 0)
>
> ret must be 0 at this point bsed on the check above.
Will fix it in V2.
-ck