* [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
@ 2022-04-06 16:41 Chaitanya Kulkarni
  2022-04-06 16:52 ` Keith Busch
  0 siblings, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-06 16:41 UTC (permalink / raw)
  To: linux-nvme; +Cc: hch, kbusch, alan.adamson, Chaitanya Kulkarni

From: Christoph Hellwig <hch@lst.de>

Since the addition of nvme_log_error() we are getting error messages
when running the blktests framework, due to internal passthru commands :-

[  612.754938] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"
[  616.361730] run blktests nvme/012 at 2022-04-06 09:26:43
[  616.382902] loop0: detected capacity change from 0 to 2097152
[  616.392680] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[  616.400913] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN testhostnqn.
[  616.401001] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR

[  627.427947] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[  627.437084] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN testhostnqn.
[  627.437161] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR
[  627.438984] nvme nvme1: creating 48 I/O queues.
[  627.442620] nvme nvme1: new ctrl: "blktests-subsystem-1"
[  628.506885] XFS (nvme1n1): Mounting V5 Filesystem
[  628.516895] XFS (nvme1n1): Ending clean mount
[  628.519966] xfs filesystem being mounted at /mnt/blktests supports timestamps until 2038 (0x7fffffff)
[  704.852721] XFS (nvme1n1): Unmounting Filesystem
[  704.864724] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"

This patch marks passthru requests with the RQF_QUIET flag and skips
the nvme_log_error() reporting when a failed request has RQF_QUIET set.
With this patch we don't get the above error messages.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
[kch: make a formal patch & test with blktests]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/nvme/host/core.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f204c6f78b5b..b913a89c743e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
 {
 	blk_status_t status = nvme_error_status(nvme_req(req)->status);
 
-	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
+	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
+		    !(req->rq_flags & RQF_QUIET)))
 		nvme_log_error(req);
 	nvme_end_req_zoned(req);
 	nvme_trace_bio_complete(req);
@@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
 	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
 
 	req->cmd_flags |= REQ_FAILFAST_DRIVER;
+	req->rq_flags |= RQF_QUIET;
 	if (req->mq_hctx->type == HCTX_TYPE_POLL)
 		req->cmd_flags |= REQ_POLLED;
 	nvme_clear_nvme_request(req);
-- 
2.29.0




* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 16:41 [PATCH] nvme-core: mark passthru requests RQF_QUIET flag Chaitanya Kulkarni
@ 2022-04-06 16:52 ` Keith Busch
  2022-04-06 17:01   ` Chaitanya Kulkarni
  2022-04-06 17:03   ` Chaitanya Kulkarni
  0 siblings, 2 replies; 23+ messages in thread
From: Keith Busch @ 2022-04-06 16:52 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, alan.adamson

On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
>  {
>  	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>  
> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
> +		    !(req->rq_flags & RQF_QUIET)))
>  		nvme_log_error(req);
>  	nvme_end_req_zoned(req);
>  	nvme_trace_bio_complete(req);
> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
>  	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
>  
>  	req->cmd_flags |= REQ_FAILFAST_DRIVER;
> +	req->rq_flags |= RQF_QUIET;

This defeats the admin error logging logic since every admin command comes
through here. If you're sure we should do this, then I suppose you can remove
that unreachable code.
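
For context, a rough sketch of the structure in question, condensed from
nvme_log_error() of this era (treat as approximate); the admin branch at
the end is the part that would become unreachable:

	static void nvme_log_error(struct request *req)
	{
		struct nvme_ns *ns = req->q->queuedata;

		if (ns) {
			/* I/O commands: logged with nvme_get_opcode_str() */
			...
			return;
		}
		/* admin commands (no namespace attached): logged with
		 * nvme_get_admin_opcode_str() and the opcode array behind it */
		...
	}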



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 16:52 ` Keith Busch
@ 2022-04-06 17:01   ` Chaitanya Kulkarni
  2022-04-06 17:21     ` Keith Busch
  2022-04-06 17:03   ` Chaitanya Kulkarni
  1 sibling, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-06 17:01 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-nvme, hch, alan.adamson, Chaitanya Kulkarni

On 4/6/22 09:52, Keith Busch wrote:
> On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
>> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
>>   {
>>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>   
>> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
>> +		    !(req->rq_flags & RQF_QUIET)))
>>   		nvme_log_error(req);
>>   	nvme_end_req_zoned(req);
>>   	nvme_trace_bio_complete(req);
>> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
>>   	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
>>   
>>   	req->cmd_flags |= REQ_FAILFAST_DRIVER;
>> +	req->rq_flags |= RQF_QUIET;
> 
> This defeats the admin error logging logic since every admin command comes
> through here. If you're sure we should do this, then I suppose you can remove
> that unreachable code.

If you point out the unreachable code that will be great,
I'll keep looking meanwhile...

-ck



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 16:52 ` Keith Busch
  2022-04-06 17:01   ` Chaitanya Kulkarni
@ 2022-04-06 17:03   ` Chaitanya Kulkarni
  1 sibling, 0 replies; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-06 17:03 UTC (permalink / raw)
  To: Keith Busch, Chaitanya Kulkarni; +Cc: linux-nvme, hch, alan.adamson

On 4/6/22 09:52, Keith Busch wrote:
> On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
>> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
>>   {
>>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>   
>> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
>> +		    !(req->rq_flags & RQF_QUIET)))
>>   		nvme_log_error(req);
>>   	nvme_end_req_zoned(req);
>>   	nvme_trace_bio_complete(req);
>> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
>>   	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
>>   
>>   	req->cmd_flags |= REQ_FAILFAST_DRIVER;
>> +	req->rq_flags |= RQF_QUIET;
> 
> This defeats the admin error logging logic since every admin command comes
> through here. If you're sure we should do this, then I suppose you can remove
> that unreachable code.


Perhaps we should set req->rq_flags |= RQF_QUIET in the respective
callers? Just thinking out loud...

-ck



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 17:01   ` Chaitanya Kulkarni
@ 2022-04-06 17:21     ` Keith Busch
  2022-04-06 22:06       ` Alan Adamson
  0 siblings, 1 reply; 23+ messages in thread
From: Keith Busch @ 2022-04-06 17:21 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, alan.adamson

On Wed, Apr 06, 2022 at 05:01:46PM +0000, Chaitanya Kulkarni wrote:
> On 4/6/22 09:52, Keith Busch wrote:
> > On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
> >> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
> >>   {
> >>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
> >>   
> >> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> >> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
> >> +		    !(req->rq_flags & RQF_QUIET)))
> >>   		nvme_log_error(req);
> >>   	nvme_end_req_zoned(req);
> >>   	nvme_trace_bio_complete(req);
> >> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
> >>   	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
> >>   
> >>   	req->cmd_flags |= REQ_FAILFAST_DRIVER;
> >> +	req->rq_flags |= RQF_QUIET;
> > 
> > This defeats the admin error logging logic since every admin command comes
> > through here. If you're sure we should do this, then I suppose you can remove
> > that unreachable code.
> 
> If you point out the unreachable code that will be great,
> I'll keep looking meanwhile...

The second half of nvme_log_error(), plus nvme_get_admin_opcode_str() and the
array it defines, become unreachable since no admin command logs errors with
this change.

You could skip the RQF_QUIET setting and check blk_rq_is_passthrough() instead.
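
For illustration, a minimal sketch of that alternative (the nvme_end_req()
hunk above rewritten to test blk_rq_is_passthrough(); note that helper is
true for user ioctls as well as driver-internal commands, which is the
ambiguity raised below):

	/* sketch only: suppress logging for every passthrough request */
	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
		     !blk_rq_is_passthrough(req)))
		nvme_log_error(req);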



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 17:21     ` Keith Busch
@ 2022-04-06 22:06       ` Alan Adamson
  2022-04-06 22:16         ` Chaitanya Kulkarni
  2022-04-07  8:51         ` hch
  0 siblings, 2 replies; 23+ messages in thread
From: Alan Adamson @ 2022-04-06 22:06 UTC (permalink / raw)
  To: Keith Busch; +Cc: Chaitanya Kulkarni, linux-nvme, hch



> On Apr 6, 2022, at 10:21 AM, Keith Busch <kbusch@kernel.org> wrote:
> 
> On Wed, Apr 06, 2022 at 05:01:46PM +0000, Chaitanya Kulkarni wrote:
>> On 4/6/22 09:52, Keith Busch wrote:
>>> On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
>>>> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
>>>>  {
>>>>  	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>>> 
>>>> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>>>> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
>>>> +		    !(req->rq_flags & RQF_QUIET)))
>>>>  		nvme_log_error(req);
>>>>  	nvme_end_req_zoned(req);
>>>>  	nvme_trace_bio_complete(req);
>>>> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
>>>>  	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
>>>> 
>>>>  	req->cmd_flags |= REQ_FAILFAST_DRIVER;
>>>> +	req->rq_flags |= RQF_QUIET;
>>> 
>>> This defeats the admin error logging logic since every admin command comes
>>> through here. If you're sure we should do this, then I suppose you can remove
>>> that unreachable code.
>> 
>> If you point out the unreachable code that will be great,
>> I'll keep looking meanwhile...
> 
> The second half of nvme_log_error(), plus nvme_get_admin_opcode_str() and the
> array it defines are unreachable since all admin commands don't log errors with
> this change.
> 
> You could skip the RQF_QUIET setting and check blk_rq_is_passthrough() instead.

Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme admin-passthru command will log an error.
I ran into this using the blktests I’m coding up for verbose errors.  Is this the behavior we want?

Alan



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 22:06       ` Alan Adamson
@ 2022-04-06 22:16         ` Chaitanya Kulkarni
  2022-04-06 23:29           ` Alan Adamson
  2022-04-07  8:48           ` Christoph Hellwig
  2022-04-07  8:51         ` hch
  1 sibling, 2 replies; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-06 22:16 UTC (permalink / raw)
  To: Alan Adamson, Keith Busch, Christoph Hellwig, Hannes Reinecke,
	Sagi Grimberg
  Cc: linux-nvme, hch

On 4/6/22 15:06, Alan Adamson wrote:
> 
> 
>> On Apr 6, 2022, at 10:21 AM, Keith Busch <kbusch@kernel.org> wrote:
>>
>> On Wed, Apr 06, 2022 at 05:01:46PM +0000, Chaitanya Kulkarni wrote:
>>> On 4/6/22 09:52, Keith Busch wrote:
>>>> On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
>>>>> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
>>>>>   {
>>>>>   	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>>>>
>>>>> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>>>>> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
>>>>> +		    !(req->rq_flags & RQF_QUIET)))
>>>>>   		nvme_log_error(req);
>>>>>   	nvme_end_req_zoned(req);
>>>>>   	nvme_trace_bio_complete(req);
>>>>> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
>>>>>   	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
>>>>>
>>>>>   	req->cmd_flags |= REQ_FAILFAST_DRIVER;
>>>>> +	req->rq_flags |= RQF_QUIET;
>>>>
>>>> This defeats the admin error logging logic since every admin command comes
>>>> through here. If you're sure we should do this, then I suppose you can remove
>>>> that unreachable code.
>>>
>>> If you point out the unreachable code that will be great,
>>> I'll keep looking meanwhile...
>>
>> The second half of nvme_log_error(), plus nvme_get_admin_opcode_str() and the
>> array it defines are unreachable since all admin commands don't log errors with
>> this change.
>>
>> You could skip the RQF_QUIET setting and check blk_rq_is_passthrough() instead.
> 
> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme admin-passthru command will log an error.
> I ran into this using the blktests I’m coding up for verbose errors.  Is this the behavior we want?
> 
> Alan
> 

Sagi/Christoph/Hannes/Keith,

After debugging the issue, the following patch [1] makes the errors disappear.

But I'm not sure whether this behavior aligns with the protocol or not.

I'll keep digging; meanwhile, if anyone has an idea, please review the
patch in [1] and say whether it makes sense or not.

-ck

[1] mask invalid NVME_ID_CNS_CS_CTRL errors.

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 397daaf51f1b..e5eea2f0ac9c 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -718,9 +718,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
                         switch (req->cmd->identify.csi) {
                         case NVME_CSI_ZNS:
                                 return nvmet_execute_identify_cns_cs_ctrl(req);
+                       case NVME_CSI_NVM:
+                               return nvmet_execute_identify_ctrl(req);
                         default:
                                 break;
                         }
+               } else {
+                       switch (req->cmd->identify.csi) {
+                       case NVME_CSI_NVM:
+                               return nvmet_execute_identify_ctrl(req);
+                       }
                 }
                 break;
         case NVME_ID_CNS_NS_ACTIVE_LIST:
diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
index c2162eef8ce1..34c7ed055674 100644
--- a/drivers/nvme/target/discovery.c
+++ b/drivers/nvme/target/discovery.c
@@ -254,7 +254,11 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
         if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
                 return;

-       if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
+       switch (req->cmd->identify.cns) {
+       case NVME_ID_CNS_CTRL:
+       case NVME_ID_CNS_CS_CTRL:
+               break;
+       default:
                 req->error_loc = offsetof(struct nvme_identify, cns);
                 status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
                 goto out;



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 22:16         ` Chaitanya Kulkarni
@ 2022-04-06 23:29           ` Alan Adamson
  2022-04-07 19:40             ` Chaitanya Kulkarni
  2022-04-07  8:48           ` Christoph Hellwig
  1 sibling, 1 reply; 23+ messages in thread
From: Alan Adamson @ 2022-04-06 23:29 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: Keith Busch, Christoph Hellwig, Hannes Reinecke, Sagi Grimberg,
	linux-nvme

I tried the patch with my tests.  It resolves the error logging with nvme/012.  My verbose errors test (nvme/039) now passes,
but we still see the error when the system boots.


root@localhost blktests]# ./check nvme/012
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  39.180s  ...  41.432s
[root@localhost blktests]#


[   77.946505] run blktests nvme/012 at 2022-04-06 19:22:23
[   78.013092] loop: module loaded
[   78.014344] loop0: detected capacity change from 0 to 2097152
[   78.024875] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[   78.052874] nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000.
[   78.053107] nvme nvme1: creating 1 I/O queues.
[   78.053182] nvme nvme1: new ctrl: "blktests-subsystem-1"
[   79.125929] XFS (nvme1n1): Mounting V5 Filesystem
[   79.128321] XFS (nvme1n1): Ending clean mount
[   79.128480] xfs filesystem being mounted at /mnt/blktests supports timestamps until 2038 (0x7fffffff)
[  119.253246] XFS (nvme1n1): Unmounting Filesystem
[  119.264005] nvme nvme1: Removing ctrl: NQN "blktests-subsystem-1"


Verbose Errors blktests
=======================
[root@localhost blktests]# ./check nvme/039
nvme/039 => nvme0n1 (test error logging)                     [passed]
    runtime  0.060s  ...  0.058s
[root@localhost blktests]#

Boot
====
[    3.244056] nvme0: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) DNR


Alan




> On Apr 6, 2022, at 3:16 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
> 
> On 4/6/22 15:06, Alan Adamson wrote:
>> 
>> 
>>> On Apr 6, 2022, at 10:21 AM, Keith Busch <kbusch@kernel.org> wrote:
>>> 
>>> On Wed, Apr 06, 2022 at 05:01:46PM +0000, Chaitanya Kulkarni wrote:
>>>> On 4/6/22 09:52, Keith Busch wrote:
>>>>> On Wed, Apr 06, 2022 at 09:41:09AM -0700, Chaitanya Kulkarni wrote:
>>>>>> @@ -370,7 +370,8 @@ static inline void nvme_end_req(struct request *req)
>>>>>>  {
>>>>>>  	blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>>>>> 
>>>>>> -	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>>>>>> +	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS &&
>>>>>> +		    !(req->rq_flags & RQF_QUIET)))
>>>>>>  		nvme_log_error(req);
>>>>>>  	nvme_end_req_zoned(req);
>>>>>>  	nvme_trace_bio_complete(req);
>>>>>> @@ -651,6 +652,7 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
>>>>>>  	cmd->common.flags &= ~NVME_CMD_SGL_ALL;
>>>>>> 
>>>>>>  	req->cmd_flags |= REQ_FAILFAST_DRIVER;
>>>>>> +	req->rq_flags |= RQF_QUIET;
>>>>> 
>>>>> This defeats the admin error logging logic since every admin command comes
>>>>> through here. If you're sure we should do this, then I suppose you can remove
>>>>> that unreachable code.
>>>> 
>>>> If you point out the unreachable code that will be great,
>>>> I'll keep looking meanwhile...
>>> 
>>> The second half of nvme_log_error(), plus nvme_get_admin_opcode_str() and the
>>> array it defines are unreachable since all admin commands don't log errors with
>>> this change.
>>> 
>>> You could skip the RQF_QUIET setting and check blk_rq_is_passthrough() instead.
>> 
>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme admin-passthru command will log an error.
>> I ran into this using the blktests I’m coding up for verbose errors.  Is this the behavior we want?
>> 
>> Alan
>> 
> 
> Sagi/Christoph/Hannes/Keith,
> 
> After debugging the issue following patch [1] makes errors disappear.
> 
> But I'm not sure if this behavior aligns with protocol or not.
> 
> I'll keep digging, meanwhile if anyone has an idea please provide a 
> review on patch in [1] if make sense or not.
> 
> -ck
> 
> [1] mask invalid NVME_ID_CNS_CS_CTRL errors.
> 
> diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> index 397daaf51f1b..e5eea2f0ac9c 100644
> --- a/drivers/nvme/target/admin-cmd.c
> +++ b/drivers/nvme/target/admin-cmd.c
> @@ -718,9 +718,16 @@ static void nvmet_execute_identify(struct nvmet_req *req)
>                         switch (req->cmd->identify.csi) {
>                         case NVME_CSI_ZNS:
>                                 return nvmet_execute_identify_cns_cs_ctrl(req);
> +                       case NVME_CSI_NVM:
> +                               return nvmet_execute_identify_ctrl(req);
>                         default:
>                                 break;
>                         }
> +               } else {
> +                       switch (req->cmd->identify.csi) {
> +                       case NVME_CSI_NVM:
> +                               return nvmet_execute_identify_ctrl(req);
> +                       }
>                 }
>                 break;
>         case NVME_ID_CNS_NS_ACTIVE_LIST:
> diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
> index c2162eef8ce1..34c7ed055674 100644
> --- a/drivers/nvme/target/discovery.c
> +++ b/drivers/nvme/target/discovery.c
> @@ -254,7 +254,11 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
>         if (!nvmet_check_transfer_len(req, NVME_IDENTIFY_DATA_SIZE))
>                 return;
> 
> -       if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
> +       switch (req->cmd->identify.cns) {
> +       case NVME_ID_CNS_CTRL:
> +       case NVME_ID_CNS_CS_CTRL:
> +               break;
> +       default:
>                 req->error_loc = offsetof(struct nvme_identify, cns);
>                 status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
>                 goto out;
> 



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 22:16         ` Chaitanya Kulkarni
  2022-04-06 23:29           ` Alan Adamson
@ 2022-04-07  8:48           ` Christoph Hellwig
  1 sibling, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2022-04-07  8:48 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: Alan Adamson, Keith Busch, Christoph Hellwig, Hannes Reinecke,
	Sagi Grimberg, linux-nvme

This won't help with other controllers.  But it still is a useful
patch, so please submit it separately.



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 22:06       ` Alan Adamson
  2022-04-06 22:16         ` Chaitanya Kulkarni
@ 2022-04-07  8:51         ` hch
  2022-04-07 11:52           ` Sagi Grimberg
  2022-04-07 16:15           ` Alan Adamson
  1 sibling, 2 replies; 23+ messages in thread
From: hch @ 2022-04-07  8:51 UTC (permalink / raw)
  To: Alan Adamson; +Cc: Keith Busch, Chaitanya Kulkarni, linux-nvme, hch

On Wed, Apr 06, 2022 at 10:06:06PM +0000, Alan Adamson wrote:
> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme admin-passthru command will log an error.
> I ran into this using the blktests I’m coding up for verbose errors.  Is this the behavior we want?

Well, you submitted the logging so we're curious about your use case.
SCSI skips logging errors for internally submitted commands, but not for
userspace passthrough.  So we could move the RQF_QUIET into
__nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.
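
For reference, a minimal sketch of that SCSI precedent (the request setup in
__scsi_execute() in drivers/scsi/scsi_lib.c of this era; quoted from memory,
so treat as approximate):

	/* internally submitted SCSI commands are marked quiet up front;
	 * userspace SG_IO passthrough does not go through here */
	req->rq_flags |= rq_flags | RQF_QUIET;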



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07  8:51         ` hch
@ 2022-04-07 11:52           ` Sagi Grimberg
  2022-04-07 20:10             ` Chaitanya Kulkarni
  2022-04-07 16:15           ` Alan Adamson
  1 sibling, 1 reply; 23+ messages in thread
From: Sagi Grimberg @ 2022-04-07 11:52 UTC (permalink / raw)
  To: hch, Alan Adamson; +Cc: Keith Busch, Chaitanya Kulkarni, linux-nvme


>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme admin-passthru command will log an error.
>> I ran into this using the blktests I’m coding up for verbose errors.  Is this the behavior we want?
> 
> Well, you submitted the logging so we're curious about your use case.
> SCSI skips logging errors for internally submitted commands, but not for
> userspace passthrough.  So we could move the RQF_QUIET into
> __nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.
> 

Makes sense to me



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07  8:51         ` hch
  2022-04-07 11:52           ` Sagi Grimberg
@ 2022-04-07 16:15           ` Alan Adamson
  1 sibling, 0 replies; 23+ messages in thread
From: Alan Adamson @ 2022-04-07 16:15 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Keith Busch, Chaitanya Kulkarni, linux-nvme



> On Apr 7, 2022, at 1:51 AM, hch@lst.de wrote:
> 
> On Wed, Apr 06, 2022 at 10:06:06PM +0000, Alan Adamson wrote:
>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme admin-passthru command will log an error.
>> I ran into this using the blktests I’m coding up for verbose errors.  Is this the behavior we want?
> 
> Well, you submitted the logging so we're curious about your use case.
> SCSI skips logging errors for internally submitted commands, but not for
> userspace passthrough.  So we could move the RQF_QUIET into
> __nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.

I think that's what we want: skip logging for nvme driver-submitted commands, and allow
logging for passthrough commands.

Alan



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-06 23:29           ` Alan Adamson
@ 2022-04-07 19:40             ` Chaitanya Kulkarni
  0 siblings, 0 replies; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-07 19:40 UTC (permalink / raw)
  To: Alan Adamson
  Cc: Keith Busch, Christoph Hellwig, Hannes Reinecke, Sagi Grimberg,
	linux-nvme

On 4/6/22 16:29, Alan Adamson wrote:
> I tried the patch with my tests.  It resolves the error logging with nvme/012.  My verbose errors tests (nvme/039) now passes,
> but we still see the error when the system boots.
> 

Yes, that is something we need to fix. Also, it'll be great if we can
avoid top-posting.

To fix this I've suggested fixing the call sites see [1].

-ck

[1]
http://lists.infradead.org/pipermail/linux-nvme/2022-April/031160.html




* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07 11:52           ` Sagi Grimberg
@ 2022-04-07 20:10             ` Chaitanya Kulkarni
  2022-04-07 20:35               ` Alan Adamson
  0 siblings, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-07 20:10 UTC (permalink / raw)
  To: Sagi Grimberg, hch; +Cc: Keith Busch, Alan Adamson, linux-nvme

On 4/7/22 04:52, Sagi Grimberg wrote:
> 
>>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme 
>>> admin-passthru command will log an error.
>>> I ran into this using the blktests I’m coding up for verbose errors.  
>>> Is this the behavior we want?
>>
>> Well, you submitted the logging so we're curious about your use case.
>> SCSI skips logging errors for internally submitted commands, but not for
>> userspace passthrough.  So we could move the RQF_QUIET into
>> __nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.
>>
> 
> Makes sense to me

This was my initial proposal to fix the call sites with internal
passthru commands [1], so this should work.

-ck

[1]
http://lists.infradead.org/pipermail/linux-nvme/2022-April/031160.html




* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07 20:10             ` Chaitanya Kulkarni
@ 2022-04-07 20:35               ` Alan Adamson
  2022-04-07 21:00                 ` Chaitanya Kulkarni
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Adamson @ 2022-04-07 20:35 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme



> On Apr 7, 2022, at 1:10 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
> 
> On 4/7/22 04:52, Sagi Grimberg wrote:
>> 
>>>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme 
>>>> admin-passthru command will log an error.
>>>> I ran into this using the blktests I’m coding up for verbose errors.  
>>>> Is this the behavior we want?
>>> 
>>> Well, you submitted the logging so we're curious about your use case.
>>> SCSI skips logging errors for internally submitted commands, but not for
>>> userspace passthrough.  So we could move the RQF_QUIET into
>>> __nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.
>>> 
>> 
>> Makes sense to me
> 
> This was my initial proposal to fix the call sites with internal
> passthru commands [1], so this should work.
> 
> -ck
> 
> [1]
> http://lists.infradead.org/pipermail/linux-nvme/2022-April/031160.html
> 
> 

It works (suppresses logging) for driver-initiated admin cmds, which is good, but it also suppresses logging
when running "nvme admin-passthru”, which I don’t think we want.

Alan



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07 20:35               ` Alan Adamson
@ 2022-04-07 21:00                 ` Chaitanya Kulkarni
  2022-04-07 21:13                   ` Alan Adamson
  0 siblings, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-07 21:00 UTC (permalink / raw)
  To: Alan Adamson; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme

On 4/7/22 13:35, Alan Adamson wrote:
> 
> 
>> On Apr 7, 2022, at 1:10 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
>>
>> On 4/7/22 04:52, Sagi Grimberg wrote:
>>>
>>>>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme
>>>>> admin-passthru command will log an error.
>>>>> I ran into this using the blktests I’m coding up for verbose errors.
>>>>> Is this the behavior we want?
>>>>
>>>> Well, you submitted the logging so we're curious about your use case.
>>>> SCSI skips logging errors for internally submitted commands, but not for
>>>> userspace passthrough.  So we could move the RQF_QUIET into
>>>> __nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.
>>>>
>>>
>>> Makes sense to me
>>
>> This was my initial proposal to fix the call sites with internal
>> passthru commands [1], so this should work.
>>
>> -ck
>>
>> [1]
>> http://lists.infradead.org/pipermail/linux-nvme/2022-April/031160.html
>>
>>
> 
> It works (suppresses logging) for driver initiated admin cmds which is good, but it also suppresses logging
> when running "nvme admin-passthru” which I don’t think we want.
> 
> Alan
> 

I think nvme admin-passthru and io-passthru use a different
interface than internal passthru commands, i.e.

nvme admin-passthru :- NVME_IOCTL_ADMIN_CMD

  nvme_submit_user_cmd()
   req allocation and submission, no internal passthru cmds.

nvme io-passthru:- NVME_IOCTL_IO_CMD

  nvme_user_cmd()
   nvme_submit_user_cmd()
    req allocation and submission, no internal passthru cmds.

The internal passthru interface uses its own request allocation
and submission, so if we change that it should not affect the
nvme admin/io passthru interface... unless we are using something
else for internal passthru which is not ultimately going through
__nvme_submit_sync_cmd().

In short, what I meant is the following patch; this should help
us remove the boot-time messages you have reported and the
blktests warnings too. Can you please confirm?

If not, then I'll keep digging..

nvme (nvme-5.18) # git diff
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f204c6f78b5b..a1ea2f736d42 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
  {
         blk_status_t status = nvme_error_status(nvme_req(req)->status);

-       if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
+       if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
                 nvme_log_error(req);
         nvme_end_req_zoned(req);
         nvme_trace_bio_complete(req);
@@ -1086,9 +1086,11 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
         else
                 req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
                                                 qid ? qid - 1 : 0);
-
         if (IS_ERR(req))
                 return PTR_ERR(req);
+
+       req->rq_flags |= RQF_QUIET;
+
         nvme_init_request(req, cmd);

         if (timeout)



-ck





* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07 21:00                 ` Chaitanya Kulkarni
@ 2022-04-07 21:13                   ` Alan Adamson
  2022-04-08  2:13                     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Adamson @ 2022-04-07 21:13 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme



> On Apr 7, 2022, at 2:00 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
> 
> On 4/7/22 13:35, Alan Adamson wrote:
>> 
>> 
>>> On Apr 7, 2022, at 1:10 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
>>> 
>>> On 4/7/22 04:52, Sagi Grimberg wrote:
>>>> 
>>>>>> Using RQF_QUIET or blk_rq_is_passthrough() will mean no nvme
>>>>>> admin-passthru command will log an error.
>>>>>> I ran into this using the blktests I’m coding up for verbose errors.
>>>>>> Is this the behavior we want?
>>>>> 
>>>>> Well, you submitted the logging so we're curious about your use case.
>>>>> SCSI skips logging errors for internally submitted commands, but not for
>>>>> userspace passthrough.  So we could move the RQF_QUIET into
>>>>> __nvme_submit_sync_cmd / nvme_keep_alive_work / nvme_timeout / etc.
>>>>> 
>>>> 
>>>> Makes sense to me
>>> 
>>> This was my initial proposal to fix the call sites with internal
>>> passthru commands [1], so this should work.
>>> 
>>> -ck
>>> 
>>> [1]
>>> http://lists.infradead.org/pipermail/linux-nvme/2022-April/031160.html
>>> 
>>> 
>> 
>> It works (suppresses logging) for driver initiated admin cmds which is good, but it also suppresses logging
>> when running "nvme admin-passthru” which I don’t think we want.
>> 
>> Alan
>> 
> 
> I think nvme admin-passthru and io-passthru uses different
> interface than internal passthru commnds i.e.
> 
> nvme admin-passthru :- NVME_IOCTL_ADMIN
> 
>  nvme_submit_user_cmd()
>   req allocation and submission no internal passthru cmds.
> 
> nvme io-passthru:- NVME_IOCTL_IO_CMD
> 
>  nvme_user_cmd()
>   nvme_submit_user_cmd()
>    req allocation and submission, no internal passthru cmds.
> 
> The internal passthru interface uses it's own request allocation
> and submission, so if we change that it should not affect the
> nvme admin-io passthru interface... unless we are using something
> else for internal passthru which is not going through 
> __nvme_submit_sync_cmd() ultimately.
> 
> In short what I meant is following patch, this should help
> us remove the boot time messages you have reported and
> blktest warnings also, can you please confirm ?
> 
> if not then I'll keep digging..
> 
> nvme (nvme-5.18) # git diff
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index f204c6f78b5b..a1ea2f736d42 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>  {
>         blk_status_t status = nvme_error_status(nvme_req(req)->status);
> 
> -       if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
> +       if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>                 nvme_log_error(req);
>         nvme_end_req_zoned(req);
>         nvme_trace_bio_complete(req);
> @@ -1086,9 +1086,11 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>         else
>                 req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
>                                                 qid ? qid - 1 : 0);
> -
>         if (IS_ERR(req))
>                 return PTR_ERR(req);
> +
> +       req->rq_flags |= RQF_QUIET;
> +
>         nvme_init_request(req, cmd);
> 
>         if (timeout)
> 
> 
> if not then I'll keep digging..
> 
> -ck

I just tried the patch on my config; it properly suppresses the bootup message, but it also suppresses messages from "nvme admin-passthru”.

Alan



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-07 21:13                   ` Alan Adamson
@ 2022-04-08  2:13                     ` Chaitanya Kulkarni
  2022-04-08 16:24                       ` Alan Adamson
  0 siblings, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-08  2:13 UTC (permalink / raw)
  To: Alan Adamson; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme


>> nvme (nvme-5.18) # git diff
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index f204c6f78b5b..a1ea2f736d42 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>>   {
>>          blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>
>> -       if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>> +       if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>>                  nvme_log_error(req);
>>          nvme_end_req_zoned(req);
>>          nvme_trace_bio_complete(req);
>> @@ -1086,9 +1086,11 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>>          else
>>                  req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
>>                                                  qid ? qid - 1 : 0);
>> -
>>          if (IS_ERR(req))
>>                  return PTR_ERR(req);
>> +
>> +       req->rq_flags |= RQF_QUIET;
>> +
>>          nvme_init_request(req, cmd);
>>
>>          if (timeout)
>>
>>
>> if not then I'll keep digging..
>>
>> -ck
> 
> I just tried the patch on my config, it properly suppresses the bootup message, but it also suppresses messages from "nvme admin-passthru”.
> 
> Alan
> 

Can you please share a command line for "nvme admin-passthru"
where this patch suppresses messages?

-ck




* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-08  2:13                     ` Chaitanya Kulkarni
@ 2022-04-08 16:24                       ` Alan Adamson
  2022-04-09  0:10                         ` Chaitanya Kulkarni
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Adamson @ 2022-04-08 16:24 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme



> On Apr 7, 2022, at 7:13 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
> 
> 
>>> nvme (nvme-5.18) # git diff
>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>> index f204c6f78b5b..a1ea2f736d42 100644
>>> --- a/drivers/nvme/host/core.c
>>> +++ b/drivers/nvme/host/core.c
>>> @@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
>>>  {
>>>         blk_status_t status = nvme_error_status(nvme_req(req)->status);
>>> 
>>> -       if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
>>> +       if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
>>>                 nvme_log_error(req);
>>>         nvme_end_req_zoned(req);
>>>         nvme_trace_bio_complete(req);
>>> @@ -1086,9 +1086,11 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
>>>         else
>>>                 req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
>>>                                                 qid ? qid - 1 : 0);
>>> -
>>>         if (IS_ERR(req))
>>>                 return PTR_ERR(req);
>>> +
>>> +       req->rq_flags |= RQF_QUIET;
>>> +
>>>         nvme_init_request(req, cmd);
>>> 
>>>         if (timeout)
>>> 
>>> 
>>> if not then I'll keep digging..
>>> 
>>> -ck
>> 
>> I just tried the patch on my config, it properly suppresses the bootup message, but it also suppresses messages from "nvme admin-passthru”.
>> 
>> Alan
>> 
> 
> Can you please share a command line for "nvme admin-passthru"
> where this patch suppresses messages?

I have the NVME Fault Injector configured:

echo 0x286 > /sys/kernel/debug/${ctrl_dev}/fault_inject/status
echo 1000 > /sys/kernel/debug/${ctrl_dev}/fault_inject/times
echo 100 > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability

nvme admin-passthru /dev/${ctrl_dev} --opcode=06 --data-len=4096 --cdw10=1 -r

echo 0 >  /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
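
(For what it's worth, the injected status 0x286 decodes to sct 0x2 / sc 0x86,
i.e. the Access Denied completion that shows up in the logs later in this
thread.)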

> 
> -ck
> 
> 



* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-08 16:24                       ` Alan Adamson
@ 2022-04-09  0:10                         ` Chaitanya Kulkarni
  2022-04-11 18:31                           ` Alan Adamson
  0 siblings, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-09  0:10 UTC (permalink / raw)
  To: Alan Adamson; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme


>> Can you please share a command line for "nvme admin-passthru"
>> where this patch suppresses messages?
> 
> I have the NVME Fault Injector configured:
> 
> echo 0x286 > /sys/kernel/debug/${ctrl_dev}/fault_inject/status
> echo 1000 > /sys/kernel/debug/${ctrl_dev}/fault_inject/times
> echo 100 > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
> 
> nvme admin-passthru /dev/${ctrl_dev} --opcode=06 --data-len=4096 --cdw10=1 -r
> 
> echo 0 >  /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
> 

I was able to reproduce the same admin-passthru error messages with my patch.
See below for a detailed execution with the script and the log; can you please
tell me what is missing?

>> -ck
>>
>>
> 

* Without this patch I get 3 error messages with fault injection enabled :-

1. [ 1743.353266] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR

2. [ 1744.370690] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, times 1000
[ 1744.370698] CPU: 41 PID: 389 Comm: kworker/41:1 Tainted: G OE     5.17.0-rc2nvme+ #68
[ 1744.370702] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1744.370704] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1744.370712] Call Trace:
[ 1744.370715]  <TASK>
[ 1744.370717]  dump_stack_lvl+0x48/0x5e
[ 1744.370724]  should_fail.cold+0x32/0x37
[ 1744.370729]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1744.370741]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1744.370745]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1744.370754]  process_one_work+0x1af/0x380
[ 1744.370758]  worker_thread+0x50/0x3a0
[ 1744.370761]  ? rescuer_thread+0x370/0x370
[ 1744.370763]  kthread+0xe7/0x110
[ 1744.370767]  ? kthread_complete_and_exit+0x20/0x20
[ 1744.370770]  ret_from_fork+0x22/0x30
[ 1744.370776]  </TASK>
[ 1744.370783] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR

3. [ 1744.375045] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, times 999
[ 1744.375052] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G OE     5.17.0-rc2nvme+ #68
[ 1744.375056] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1744.375058] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1744.375066] Call Trace:
[ 1744.375070]  <TASK>
[ 1744.375072]  dump_stack_lvl+0x48/0x5e
[ 1744.375079]  should_fail.cold+0x32/0x37
[ 1744.375084]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1744.375096]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1744.375100]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1744.375108]  process_one_work+0x1af/0x380
[ 1744.375113]  worker_thread+0x50/0x3a0
[ 1744.375115]  ? rescuer_thread+0x370/0x370
[ 1744.375118]  kthread+0xe7/0x110
[ 1744.375121]  ? kthread_complete_and_exit+0x20/0x20
[ 1744.375124]  ret_from_fork+0x22/0x30
[ 1744.375130]  </TASK>
[ 1744.375148] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR


* With this patch I only get two messages from fault injection, which come
from nvme admin-passthru, and none from the internal passthru :-

1. [ 1765.175570] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, times 1000
[ 1765.175579] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G OE     5.17.0-rc2nvme+ #68
[ 1765.175583] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1765.175585] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1765.175593] Call Trace:
[ 1765.175596]  <TASK>
[ 1765.175599]  dump_stack_lvl+0x48/0x5e
[ 1765.175605]  should_fail.cold+0x32/0x37
[ 1765.175610]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1765.175623]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1765.175627]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1765.175636]  process_one_work+0x1af/0x380
[ 1765.175640]  worker_thread+0x50/0x3a0
[ 1765.175643]  ? rescuer_thread+0x370/0x370
[ 1765.175645]  kthread+0xe7/0x110
[ 1765.175648]  ? kthread_complete_and_exit+0x20/0x20
[ 1765.175652]  ret_from_fork+0x22/0x30
[ 1765.175658]  </TASK>
[ 1765.175664] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR

2. [ 1765.179829] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, times 999
[ 1765.179835] CPU: 44 PID: 9897 Comm: kworker/44:0 Tainted: G   OE     5.17.0-rc2nvme+ #68
[ 1765.179839] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1765.179841] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1765.179850] Call Trace:
[ 1765.179853]  <TASK>
[ 1765.179855]  dump_stack_lvl+0x48/0x5e
[ 1765.179862]  should_fail.cold+0x32/0x37
[ 1765.179867]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1765.179879]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1765.179884]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1765.179892]  process_one_work+0x1af/0x380
[ 1765.179897]  ? rescuer_thread+0x370/0x370
[ 1765.179899]  worker_thread+0x50/0x3a0
[ 1765.179902]  ? rescuer_thread+0x370/0x370
[ 1765.179903]  kthread+0xe7/0x110
[ 1765.179907]  ? kthread_complete_and_exit+0x20/0x20
[ 1765.179911]  ret_from_fork+0x22/0x30
[ 1765.179917]  </TASK>
[ 1765.179923] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR


* Detailed test log with an nvmeof nvme-loop controller configured
and fault injection enabled, run with and without this patch masking
internal passthru. It shows that the internal passthru error message
is masked with this patch and present without it, while the
nvme admin-passthru error message is present in both cases :-

nvme (nvme-5.18) # sh error_inject.sh
commit 7bec02cef3d11f3d3a80bfa8739f790377bac8d6 (HEAD -> nvme-5.18)
Merge: 226d991feef9 a4a6f3c8f61c
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Fri Apr 8 16:57:45 2022 -0700

     Merge branch 'nvme-5.18' of git://git.infradead.org/nvme into nvme-5.18
+ umount /mnt/nvme0n1
+ clear_dmesg
./compile_nvme.sh: line 3: clear_dmesg: command not found
umount: /mnt/nvme0n1: no mount point specified.
+ ./delete.sh
+ NQN=testnqn
+ nvme disconnect -n testnqn
Failed to scan topoplogy: No such file or directory

real	0m0.002s
user	0m0.000s
sys	0m0.002s
+ rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
+ rmdir /sys/kernel/config/nvmet/ports/1
rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file or directory
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
./delete.sh: line 14: /sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such file or directory
+ rmdir 'config/nullb/nullb*'
rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
+ sleep 1
+ git diff
+ sleep 1
++ nproc
+ make -j 48 M=drivers/nvme/target/ clean
++ nproc
+ make -j 48 M=drivers/nvme/ modules
   CC [M]  drivers/nvme/target/core.o
   CC [M]  drivers/nvme/target/configfs.o
   CC [M]  drivers/nvme/target/admin-cmd.o
   CC [M]  drivers/nvme/target/fabrics-cmd.o
   CC [M]  drivers/nvme/target/discovery.o
   CC [M]  drivers/nvme/target/io-cmd-file.o
   CC [M]  drivers/nvme/target/io-cmd-bdev.o
   CC [M]  drivers/nvme/target/passthru.o
   CC [M]  drivers/nvme/target/zns.o
   CC [M]  drivers/nvme/target/trace.o
   CC [M]  drivers/nvme/target/loop.o
   CC [M]  drivers/nvme/target/rdma.o
   CC [M]  drivers/nvme/target/fc.o
   CC [M]  drivers/nvme/target/fcloop.o
   CC [M]  drivers/nvme/target/tcp.o
   CC [M]  drivers/nvme/host/core.o
   LD [M]  drivers/nvme/target/nvme-loop.o
   LD [M]  drivers/nvme/target/nvme-fcloop.o
   LD [M]  drivers/nvme/target/nvmet.o
   LD [M]  drivers/nvme/target/nvmet-tcp.o
   LD [M]  drivers/nvme/target/nvmet-fc.o
   LD [M]  drivers/nvme/target/nvmet-rdma.o
   LD [M]  drivers/nvme/host/nvme-core.o
   MODPOST drivers/nvme/Module.symvers
   LD [M]  drivers/nvme/host/nvme-core.ko
   CC [M]  drivers/nvme/target/nvme-fcloop.mod.o
   CC [M]  drivers/nvme/target/nvme-loop.mod.o
   CC [M]  drivers/nvme/target/nvmet-fc.mod.o
   CC [M]  drivers/nvme/target/nvmet-rdma.mod.o
   CC [M]  drivers/nvme/target/nvmet-tcp.mod.o
   CC [M]  drivers/nvme/target/nvmet.mod.o
   LD [M]  drivers/nvme/target/nvme-loop.ko
   LD [M]  drivers/nvme/target/nvme-fcloop.ko
   LD [M]  drivers/nvme/target/nvmet-rdma.ko
   LD [M]  drivers/nvme/target/nvmet-fc.ko
   LD [M]  drivers/nvme/target/nvmet-tcp.ko
   LD [M]  drivers/nvme/target/nvmet.ko
+ HOST=drivers/nvme/host
+ TARGET=drivers/nvme/target
++ uname -r
+ HOST_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/
++ uname -r
+ TARGET_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target/
+ cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host//
+ cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
+ ls -lrth /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/ /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/:
total 6.3M
-rw-r--r--. 1 root root 2.7M Apr  8 16:58 nvme-core.ko
-rw-r--r--. 1 root root 426K Apr  8 16:58 nvme-fabrics.ko
-rw-r--r--. 1 root root 925K Apr  8 16:58 nvme-fc.ko
-rw-r--r--. 1 root root 714K Apr  8 16:58 nvme.ko
-rw-r--r--. 1 root root 856K Apr  8 16:58 nvme-rdma.ko
-rw-r--r--. 1 root root 799K Apr  8 16:58 nvme-tcp.ko

/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//:
total 6.3M
-rw-r--r--. 1 root root 475K Apr  8 16:58 nvme-fcloop.ko
-rw-r--r--. 1 root root 419K Apr  8 16:58 nvme-loop.ko
-rw-r--r--. 1 root root 734K Apr  8 16:58 nvmet-fc.ko
-rw-r--r--. 1 root root 3.2M Apr  8 16:58 nvmet.ko
-rw-r--r--. 1 root root 822K Apr  8 16:58 nvmet-rdma.ko
-rw-r--r--. 1 root root 671K Apr  8 16:58 nvmet-tcp.ko
+ sync
+ sync
+ sync
+ modprobe nvme
+ echo 'Press enter to continue ...'
Press enter to continue ...
+ read next

++ NN=1
++ NQN=testnqn
++ let NR_DEVICES=NN+1
++ modprobe -r null_blk
++ modprobe -r nvme
++ modprobe null_blk nr_devices=0
++ modprobe nvme
++ modprobe nvme-fabrics
++ modprobe nvmet
++ modprobe nvme-loop
++ dmesg -c
++ sleep 2
++ tree /sys/kernel/config
/sys/kernel/config
├── nullb
│   └── features
└── nvmet
     ├── hosts
     ├── ports
     └── subsystems

5 directories, 1 file
++ sleep 1
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
+++ shuf -i 1-1 -n 1
++ for i in `shuf -i  1-$NN -n $NN`
++ mkdir config/nullb/nullb1
++ echo 1
++ echo 4096
++ echo 2048
++ echo 1
+++ cat config/nullb/nullb1/index
++ IDX=0
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
++ echo ' ####### /dev/nullb0'
  ####### /dev/nullb0
++ echo -n /dev/nullb0
++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
/dev/nullb0
++ echo 1
++ dmesg -c
[ 1740.356540] nvme nvme0: 48/0/0 default/read/poll queues
[ 1743.345785] nvmet: adding nsid 1 to subsystem testnqn
++ mkdir /sys/kernel/config/nvmet/ports/1/
++ echo -n loop
++ echo -n 1
++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
++ echo transport=loop,nqn=testnqn
++ sleep 1
++ mount
++ column -t
++ grep nvme
++ dmesg -c
[ 1743.353177] nvmet: creating nvm controller 1 for subsystem testnqn for NQN nqn.2014-08.org.nvmexpress:uuid:510d0435-0ad7-49d4-ae4a-f1c1552b0f0c.
[ 1743.353266] nvme1: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2) MORE DNR
[ 1743.355380] nvme nvme1: creating 48 I/O queues.
[ 1743.359206] nvme nvme1: new ctrl: "testnqn"
Node SN Model Namespace Usage Format FW Rev
nvme1n1 8bdab4b79aca987d0eba Linux 1 2.15 GB / 2.15 GB 4 KiB + 0 B 5.17.0-r
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
NVMe status: Access Denied: Access to the namespace and/or LBA range is denied due to lack of access rights(0x4286)
Node SN Model Namespace Usage Format FW Rev
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
[ 1744.370690] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, times 1000
[ 1744.370698] CPU: 41 PID: 389 Comm: kworker/41:1 Tainted: G OE     5.17.0-rc2nvme+ #68
[ 1744.370702] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1744.370704] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1744.370712] Call Trace:
[ 1744.370715]  <TASK>
[ 1744.370717]  dump_stack_lvl+0x48/0x5e
[ 1744.370724]  should_fail.cold+0x32/0x37
[ 1744.370729]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1744.370741]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1744.370745]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1744.370754]  process_one_work+0x1af/0x380
[ 1744.370758]  worker_thread+0x50/0x3a0
[ 1744.370761]  ? rescuer_thread+0x370/0x370
[ 1744.370763]  kthread+0xe7/0x110
[ 1744.370767]  ? kthread_complete_and_exit+0x20/0x20
[ 1744.370770]  ret_from_fork+0x22/0x30
[ 1744.370776]  </TASK>
[ 1744.370783] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
[ 1744.375045] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, times 999
[ 1744.375052] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G OE     5.17.0-rc2nvme+ #68
[ 1744.375056] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1744.375058] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1744.375066] Call Trace:
[ 1744.375070]  <TASK>
[ 1744.375072]  dump_stack_lvl+0x48/0x5e
[ 1744.375079]  should_fail.cold+0x32/0x37
[ 1744.375084]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1744.375096]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1744.375100]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1744.375108]  process_one_work+0x1af/0x380
[ 1744.375113]  worker_thread+0x50/0x3a0
[ 1744.375115]  ? rescuer_thread+0x370/0x370
[ 1744.375118]  kthread+0xe7/0x110
[ 1744.375121]  ? kthread_complete_and_exit+0x20/0x20
[ 1744.375124]  ret_from_fork+0x22/0x30
[ 1744.375130]  </TASK>
[ 1744.375148] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

real	0m0.370s
user	0m0.001s
sys	0m0.004s
+ rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
+ rmdir /sys/kernel/config/nvmet/ports/1
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn
+ rmdir config/nullb/nullb1
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
From 2d552b2c756fce48f53d66c5c58ffb6b3e3cac6e Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <kch@nvidia.com>
Date: Fri, 8 Apr 2022 16:45:18 -0700
Subject: [PATCH] nvme-core: mark internal passthru req REQ_QUIET

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
  drivers/nvme/host/core.c | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index f204c6f78b5b..a1ea2f736d42 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -370,7 +370,7 @@ static inline void nvme_end_req(struct request *req)
  {
  	blk_status_t status = nvme_error_status(nvme_req(req)->status);

-	if (unlikely(nvme_req(req)->status != NVME_SC_SUCCESS))
+	if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET)))
  		nvme_log_error(req);
  	nvme_end_req_zoned(req);
  	nvme_trace_bio_complete(req);
@@ -1086,9 +1086,11 @@ int __nvme_submit_sync_cmd(struct request_queue 
*q, struct nvme_command *cmd,
  	else
  		req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
  						qid ? qid - 1 : 0);
-
  	if (IS_ERR(req))
  		return PTR_ERR(req);
+
+	req->rq_flags |= RQF_QUIET;
+
  	nvme_init_request(req, cmd);

  	if (timeout)
-- 
2.29.0

Applying: nvme-core: mark internal passthru req REQ_QUIET

commit 3c7952e0f8fcc9affdfa0249ab8211c18a513338 (HEAD -> nvme-5.18)
Author: Chaitanya Kulkarni <kch@nvidia.com>
Date:   Fri Apr 8 16:45:18 2022 -0700

     nvme-core: mark internal passthru req REQ_QUIET

     Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
+ umount /mnt/nvme0n1
+ clear_dmesg
./compile_nvme.sh: line 3: clear_dmesg: command not found
umount: /mnt/nvme0n1: no mount point specified.
+ ./delete.sh
+ NQN=testnqn
+ nvme disconnect -n testnqn
Failed to scan topoplogy: No such file or directory

real	0m0.002s
user	0m0.002s
sys	0m0.000s
+ rm -fr '/sys/kernel/config/nvmet/ports/1/subsystems/*'
+ rmdir /sys/kernel/config/nvmet/ports/1
rmdir: failed to remove '/sys/kernel/config/nvmet/ports/1': No such file 
or directory
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
./delete.sh: line 14: 
/sys/kernel/config/nvmet/subsystems/*/namespaces/*/enable: No such file 
or directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*/namespaces/*'
rmdir: failed to remove 
'/sys/kernel/config/nvmet/subsystems/*/namespaces/*': No such file or 
directory
+ rmdir '/sys/kernel/config/nvmet/subsystems/*'
rmdir: failed to remove '/sys/kernel/config/nvmet/subsystems/*': No such 
file or directory
+ rmdir 'config/nullb/nullb*'
rmdir: failed to remove 'config/nullb/nullb*': No such file or directory
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
+ modprobe -r nvme-fabrics
+ modprobe -r nvme_loop
+ modprobe -r nvmet
+ modprobe -r nvme
+ sleep 1
+ modprobe -r nvme-core
+ lsmod
+ grep nvme
+ sleep 1
+ git diff
+ sleep 1
++ nproc
+ make -j 48 M=drivers/nvme/target/ clean
++ nproc
+ make -j 48 M=drivers/nvme/ modules
   CC [M]  drivers/nvme/target/core.o
   CC [M]  drivers/nvme/target/configfs.o
   CC [M]  drivers/nvme/target/admin-cmd.o
   CC [M]  drivers/nvme/target/fabrics-cmd.o
   CC [M]  drivers/nvme/target/discovery.o
   CC [M]  drivers/nvme/target/io-cmd-file.o
   CC [M]  drivers/nvme/target/io-cmd-bdev.o
   CC [M]  drivers/nvme/target/passthru.o
   CC [M]  drivers/nvme/target/zns.o
   CC [M]  drivers/nvme/target/trace.o
   CC [M]  drivers/nvme/target/loop.o
   CC [M]  drivers/nvme/target/rdma.o
   CC [M]  drivers/nvme/target/fc.o
   CC [M]  drivers/nvme/target/fcloop.o
   CC [M]  drivers/nvme/target/tcp.o
   CC [M]  drivers/nvme/host/core.o
   LD [M]  drivers/nvme/target/nvme-loop.o
   LD [M]  drivers/nvme/target/nvme-fcloop.o
   LD [M]  drivers/nvme/target/nvmet.o
   LD [M]  drivers/nvme/target/nvmet-fc.o
   LD [M]  drivers/nvme/target/nvmet-tcp.o
   LD [M]  drivers/nvme/target/nvmet-rdma.o
   LD [M]  drivers/nvme/host/nvme-core.o
   MODPOST drivers/nvme/Module.symvers
   CC [M]  drivers/nvme/host/nvme-core.mod.o
   CC [M]  drivers/nvme/target/nvme-fcloop.mod.o
   CC [M]  drivers/nvme/target/nvme-loop.mod.o
   CC [M]  drivers/nvme/target/nvmet-fc.mod.o
   CC [M]  drivers/nvme/target/nvmet-rdma.mod.o
   CC [M]  drivers/nvme/target/nvmet-tcp.mod.o
   CC [M]  drivers/nvme/target/nvmet.mod.o
   LD [M]  drivers/nvme/host/nvme-core.ko
   LD [M]  drivers/nvme/target/nvmet-fc.ko
   LD [M]  drivers/nvme/target/nvmet-rdma.ko
   LD [M]  drivers/nvme/target/nvme-fcloop.ko
   LD [M]  drivers/nvme/target/nvmet.ko
   LD [M]  drivers/nvme/target/nvme-loop.ko
   LD [M]  drivers/nvme/target/nvmet-tcp.ko
+ HOST=drivers/nvme/host
+ TARGET=drivers/nvme/target
++ uname -r
+ HOST_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/
++ uname -r
+ TARGET_DEST=/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target/
+ cp drivers/nvme/host/nvme-core.ko drivers/nvme/host/nvme-fabrics.ko 
drivers/nvme/host/nvme-fc.ko drivers/nvme/host/nvme.ko 
drivers/nvme/host/nvme-rdma.ko drivers/nvme/host/nvme-tcp.ko 
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host//
+ cp drivers/nvme/target/nvme-fcloop.ko drivers/nvme/target/nvme-loop.ko 
drivers/nvme/target/nvmet-fc.ko drivers/nvme/target/nvmet.ko 
drivers/nvme/target/nvmet-rdma.ko drivers/nvme/target/nvmet-tcp.ko 
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
+ ls -lrth /lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/ 
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//
/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/host/:
total 6.3M
-rw-r--r--. 1 root root 2.7M Apr  8 16:59 nvme-core.ko
-rw-r--r--. 1 root root 426K Apr  8 16:59 nvme-fabrics.ko
-rw-r--r--. 1 root root 925K Apr  8 16:59 nvme-fc.ko
-rw-r--r--. 1 root root 714K Apr  8 16:59 nvme.ko
-rw-r--r--. 1 root root 856K Apr  8 16:59 nvme-rdma.ko
-rw-r--r--. 1 root root 799K Apr  8 16:59 nvme-tcp.ko

/lib/modules/5.17.0-rc2nvme+/kernel/drivers/nvme/target//:
total 6.3M
-rw-r--r--. 1 root root 475K Apr  8 16:59 nvme-fcloop.ko
-rw-r--r--. 1 root root 419K Apr  8 16:59 nvme-loop.ko
-rw-r--r--. 1 root root 734K Apr  8 16:59 nvmet-fc.ko
-rw-r--r--. 1 root root 3.2M Apr  8 16:59 nvmet.ko
-rw-r--r--. 1 root root 822K Apr  8 16:59 nvmet-rdma.ko
-rw-r--r--. 1 root root 671K Apr  8 16:59 nvmet-tcp.ko
+ sync
+ sync
+ sync
+ modprobe nvme
+ echo 'Press enter to continue ...'
Press enter to continue ...
+ read next

++ NN=1
++ NQN=testnqn
++ let NR_DEVICES=NN+1
++ modprobe -r null_blk
++ modprobe -r nvme
++ modprobe null_blk nr_devices=0
++ modprobe nvme
++ modprobe nvme-fabrics
++ modprobe nvmet
++ modprobe nvme-loop
++ dmesg -c
++ sleep 2
++ tree /sys/kernel/config
/sys/kernel/config
├── nullb
│   └── features
└── nvmet
     ├── hosts
     ├── ports
     └── subsystems

5 directories, 1 file
++ sleep 1
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn
+++ shuf -i 1-1 -n 1
++ for i in `shuf -i  1-$NN -n $NN`
++ mkdir config/nullb/nullb1
++ echo 1
++ echo 4096
++ echo 2048
++ echo 1
+++ cat config/nullb/nullb1/index
++ IDX=0
++ mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
++ echo ' ####### /dev/nullb0'
  ####### /dev/nullb0
++ echo -n /dev/nullb0
++ cat /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
/dev/nullb0
++ echo 1
++ dmesg -c
[ 1761.160765] nvme nvme0: 48/0/0 default/read/poll queues
[ 1764.151446] nvmet: adding nsid 1 to subsystem testnqn
++ mkdir /sys/kernel/config/nvmet/ports/1/
++ echo -n loop
++ echo -n 1
++ ln -s /sys/kernel/config/nvmet/subsystems/testnqn 
/sys/kernel/config/nvmet/ports/1/subsystems/
++ echo transport=loop,nqn=testnqn
++ sleep 1
++ mount
++ column -t
++ grep nvme
++ dmesg -c
[ 1764.158721] nvmet: creating nvm controller 1 for subsystem testnqn 
for NQN 
nqn.2014-08.org.nvmexpress:uuid:33bca6cd-e82c-4de0-bba2-f70070f69097.
[ 1764.158827] nvme nvme1: creating 48 I/O queues.
[ 1764.162745] nvme nvme1: new ctrl: "testnqn"
Node SN Model Namespace Usage Format FW Rev
nvme1n1 bc09a2ee2829a09471a3 Linux 1 2.15 GB / 2.15 GB 4 KiB + 0 B 5.17.0-r
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
NVMe status: Access Denied: Access to the namespace and/or LBA range is 
denied due to lack of access rights(0x4286)
Node SN Model Namespace Usage Format FW Rev
nvme0n1 foo QEMU NVMe Ctrl 1 1.07 GB / 1.07 GB 512 B + 0 B 1.0
[ 1765.175570] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, 
times 1000
[ 1765.175579] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
OE     5.17.0-rc2nvme+ #68
[ 1765.175583] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1765.175585] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1765.175593] Call Trace:
[ 1765.175596]  <TASK>
[ 1765.175599]  dump_stack_lvl+0x48/0x5e
[ 1765.175605]  should_fail.cold+0x32/0x37
[ 1765.175610]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1765.175623]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1765.175627]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1765.175636]  process_one_work+0x1af/0x380
[ 1765.175640]  worker_thread+0x50/0x3a0
[ 1765.175643]  ? rescuer_thread+0x370/0x370
[ 1765.175645]  kthread+0xe7/0x110
[ 1765.175648]  ? kthread_complete_and_exit+0x20/0x20
[ 1765.175652]  ret_from_fork+0x22/0x30
[ 1765.175658]  </TASK>
[ 1765.175664] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
[ 1765.179829] FAULT_INJECTION: forcing a failure.
                name fault_inject, interval 1, probability 100, space 0, 
times 999
[ 1765.179835] CPU: 44 PID: 9897 Comm: kworker/44:0 Tainted: G 
  OE     5.17.0-rc2nvme+ #68
[ 1765.179839] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 1765.179841] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
[ 1765.179850] Call Trace:
[ 1765.179853]  <TASK>
[ 1765.179855]  dump_stack_lvl+0x48/0x5e
[ 1765.179862]  should_fail.cold+0x32/0x37
[ 1765.179867]  nvme_should_fail+0x38/0x90 [nvme_core]
[ 1765.179879]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
[ 1765.179884]  nvmet_req_complete+0x11/0x50 [nvmet]
[ 1765.179892]  process_one_work+0x1af/0x380
[ 1765.179897]  ? rescuer_thread+0x370/0x370
[ 1765.179899]  worker_thread+0x50/0x3a0
[ 1765.179902]  ? rescuer_thread+0x370/0x370
[ 1765.179903]  kthread+0xe7/0x110
[ 1765.179907]  ? kthread_complete_and_exit+0x20/0x20
[ 1765.179911]  ret_from_fork+0x22/0x30
[ 1765.179917]  </TASK>
[ 1765.179923] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
+ NQN=testnqn
+ nvme disconnect -n testnqn
NQN:testnqn disconnected 1 controller(s)

real	0m0.347s
user	0m0.002s
sys	0m0.004s
+ rm -fr /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
+ rmdir /sys/kernel/config/nvmet/ports/1
+ for subsys in /sys/kernel/config/nvmet/subsystems/*
+ for ns in ${subsys}/namespaces/*
+ echo 0
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
+ rmdir /sys/kernel/config/nvmet/subsystems/testnqn
+ rmdir config/nullb/nullb1
+ umount /mnt/nvme0n1
umount: /mnt/nvme0n1: no mount point specified.
+ umount /mnt/backend
umount: /mnt/backend: not mounted.
+ modprobe -r nvme_loop
+ modprobe -r nvme_fabrics
+ modprobe -r nvmet
+ modprobe -r nvme
+ modprobe -r null_blk
+ tree /sys/kernel/config
/sys/kernel/config

0 directories, 0 files
HEAD is now at 7bec02cef3d1 Merge branch 'nvme-5.18' of 
git://git.infradead.org/nvme into nvme-5.18


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-09  0:10                         ` Chaitanya Kulkarni
@ 2022-04-11 18:31                           ` Alan Adamson
  2022-04-11 19:53                             ` Chaitanya Kulkarni
  0 siblings, 1 reply; 23+ messages in thread
From: Alan Adamson @ 2022-04-11 18:31 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme



> On Apr 8, 2022, at 5:10 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
> 
> 
>>> Can you please share a command line for "nvme admin-passthru"
>>> where this patch suppresses messages?
>> 
>> I have the NVME Fault Injector configured:
>> 
>> echo 0x286 > /sys/kernel/debug/${ctrl_dev}/fault_inject/status
>> echo 1000 > /sys/kernel/debug/${ctrl_dev}/fault_inject/times
>> echo 100 > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
>> 
>> nvme admin-passthru /dev/${ctrl_dev} --opcode=06 --data-len=4096 --cdw10=1 -r
>> 
>> echo 0 >  /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
>> 
> 
> I was able to produce the same admin-passthru error messages with my patch.
> See the detailed execution below with the script and the log; can you please
> tell me what is missing?

I reapplied your patch and reran my test, and all looks good.  I must have
applied the incorrect patch last week.

Alan



> 
>>> -ck
>>> 
>>> 
>> 
> 
> * Without this patch I get 3 error messages with fault injection
> enabled :-
> 1. [ 1743.353266] nvme1: Identify(0x6), Invalid Field in
>    Command (sct 0x0 / sc 0x2) MORE DNR
> 
> 2. [ 1744.370690] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 1000
> [ 1744.370698] CPU: 41 PID: 389 Comm: kworker/41:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1744.370702] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1744.370704] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1744.370712] Call Trace:
> [ 1744.370715]  <TASK>
> [ 1744.370717]  dump_stack_lvl+0x48/0x5e
> [ 1744.370724]  should_fail.cold+0x32/0x37
> [ 1744.370729]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1744.370741]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1744.370745]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1744.370754]  process_one_work+0x1af/0x380
> [ 1744.370758]  worker_thread+0x50/0x3a0
> [ 1744.370761]  ? rescuer_thread+0x370/0x370
> [ 1744.370763]  kthread+0xe7/0x110
> [ 1744.370767]  ? kthread_complete_and_exit+0x20/0x20
> [ 1744.370770]  ret_from_fork+0x22/0x30
> [ 1744.370776]  </TASK>
> [ 1744.370783] nvme1: Identify(0x6), Access Denied
>                (sct 0x2 / sc 0x86) DNR
> 
> 3. [1744.375045] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 999
> [ 1744.375052] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1744.375056] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1744.375058] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1744.375066] Call Trace:
> [ 1744.375070]  <TASK>
> [ 1744.375072]  dump_stack_lvl+0x48/0x5e
> [ 1744.375079]  should_fail.cold+0x32/0x37
> [ 1744.375084]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1744.375096]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1744.375100]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1744.375108]  process_one_work+0x1af/0x380
> [ 1744.375113]  worker_thread+0x50/0x3a0
> [ 1744.375115]  ? rescuer_thread+0x370/0x370
> [ 1744.375118]  kthread+0xe7/0x110
> [ 1744.375121]  ? kthread_complete_and_exit+0x20/0x20
> [ 1744.375124]  ret_from_fork+0x22/0x30
> [ 1744.375130]  </TASK>
> [ 1744.375148] nvme1: Identify(0x6), Access Denied
>                (sct 0x2 / sc 0x86) DNR
> 
> 
> * With this patch I get only two messages from fault injection, which come
> from the nvme admin-passthru command, and none from the internal passthru :-
> 
> 1. [ 1765.175570] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 1000
> [ 1765.175579] CPU: 42 PID: 391 Comm: kworker/42:1 Tainted: G 
> OE     5.17.0-rc2nvme+ #68
> [ 1765.175583] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1765.175585] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1765.175593] Call Trace:
> [ 1765.175596]  <TASK>
> [ 1765.175599]  dump_stack_lvl+0x48/0x5e
> [ 1765.175605]  should_fail.cold+0x32/0x37
> [ 1765.175610]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1765.175623]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1765.175627]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1765.175636]  process_one_work+0x1af/0x380
> [ 1765.175640]  worker_thread+0x50/0x3a0
> [ 1765.175643]  ? rescuer_thread+0x370/0x370
> [ 1765.175645]  kthread+0xe7/0x110
> [ 1765.175648]  ? kthread_complete_and_exit+0x20/0x20
> [ 1765.175652]  ret_from_fork+0x22/0x30
> [ 1765.175658]  </TASK>
> [ 1765.175664] nvme1: Identify(0x6), Access Denied
>                (sct 0x2 / sc 0x86) DNR
> 2. [ 1765.179829] FAULT_INJECTION: forcing a failure.
>                name fault_inject, interval 1, probability 100, space 0, 
> times 999
> [ 1765.179835] CPU: 44 PID: 9897 Comm: kworker/44:0 Tainted: G 
>  OE     5.17.0-rc2nvme+ #68
> [ 1765.179839] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), 
> BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> [ 1765.179841] Workqueue: nvmet-wq nvme_loop_execute_work [nvme_loop]
> [ 1765.179850] Call Trace:
> [ 1765.179853]  <TASK>
> [ 1765.179855]  dump_stack_lvl+0x48/0x5e
> [ 1765.179862]  should_fail.cold+0x32/0x37
> [ 1765.179867]  nvme_should_fail+0x38/0x90 [nvme_core]
> [ 1765.179879]  nvme_loop_queue_response+0xc9/0x143 [nvme_loop]
> [ 1765.179884]  nvmet_req_complete+0x11/0x50 [nvmet]
> [ 1765.179892]  process_one_work+0x1af/0x380
> [ 1765.179897]  ? rescuer_thread+0x370/0x370
> [ 1765.179899]  worker_thread+0x50/0x3a0
> [ 1765.179902]  ? rescuer_thread+0x370/0x370
> [ 1765.179903]  kthread+0xe7/0x110
> [ 1765.179907]  ? kthread_complete_and_exit+0x20/0x20
> [ 1765.179911]  ret_from_fork+0x22/0x30
> [ 1765.179917]  </TASK>
> [ 1765.179923] nvme1: Identify(0x6), Access Denied (sct 0x2 / sc 0x86) DNR
> 
> 
> * Detailed test log with an NVMeoF nvme-loop controller configured
> and fault injection enabled, run with and without this patch masking
> internal passthru. It shows that the internal passthru error message
> is masked with this patch and present without it, while the
> nvme admin-passthru error message is present in both cases :-
> 
> [detailed test log snipped; it is a verbatim duplicate of the log shown
> earlier in the thread]


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-11 18:31                           ` Alan Adamson
@ 2022-04-11 19:53                             ` Chaitanya Kulkarni
  2022-04-11 21:39                               ` Alan Adamson
  0 siblings, 1 reply; 23+ messages in thread
From: Chaitanya Kulkarni @ 2022-04-11 19:53 UTC (permalink / raw)
  To: Alan Adamson; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme


>> I was able to produce the same admin-passthru error messages with my patch.
>> See the detailed execution below with the script and the log; can you please
>> tell me what is missing?
> 
> I reapplied your patch and reran my test, and all looks good.  I must have
> applied the incorrect patch last week.
> 

That's what I thought. It would be great if you could send a Tested-by tag
for this so we can send it out later this week :-

http://lists.infradead.org/pipermail/linux-nvme/2022-April/031349.html
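
A Tested-by tag is simply a trailer line added in the reply, in the usual
kernel style, e.g.:

    Tested-by: Alan Adamson <alan.adamson@oracle.com>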


> Alan
> 
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] nvme-core: mark passthru requests RQF_QUIET flag
  2022-04-11 19:53                             ` Chaitanya Kulkarni
@ 2022-04-11 21:39                               ` Alan Adamson
  0 siblings, 0 replies; 23+ messages in thread
From: Alan Adamson @ 2022-04-11 21:39 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Sagi Grimberg, hch, Keith Busch, linux-nvme

- Verified the error log is no longer being seen at boot up.
- Verified errors can still be logged while using 'nvme admin-passthru'
  (the fault-injection setup used for this is sketched below).
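
For reference, a minimal sketch of that fault-injection setup, using only
the debugfs knobs quoted earlier in the thread (assuming the controller is
nvme1 and the kernel is built with the NVMe fault injector enabled):

    ctrl_dev=nvme1                  # assumed controller name
    # 0x286 = Access Denied (sct 0x2 / sc 0x86), the status to inject
    echo 0x286 > /sys/kernel/debug/${ctrl_dev}/fault_inject/status
    echo 1000 > /sys/kernel/debug/${ctrl_dev}/fault_inject/times
    echo 100 > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability
    # issue an Identify via passthru; it should now fail and still be logged
    nvme admin-passthru /dev/${ctrl_dev} --opcode=06 --data-len=4096 --cdw10=1 -r
    # disable injection again
    echo 0 > /sys/kernel/debug/${ctrl_dev}/fault_inject/probability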

Tested-by: Alan Adamson <alan.adamson@oracle.com>

> On Apr 11, 2022, at 12:53 PM, Chaitanya Kulkarni <chaitanyak@nvidia.com> wrote:
> 
> 
>>> I was able to produce the same admin-passthru error messages with my patch.
>>> See the detailed execution below with the script and the log; can you please
>>> tell me what is missing?
>> 
>> I reapplied your patch and reran my test, and all looks good.  I must have
>> applied the incorrect patch last week.
>> 
> 
> That's what I thought. It would be great if you could send a Tested-by tag
> for this so we can send it out later this week :-
> 
> http://lists.infradead.org/pipermail/linux-nvme/2022-April/031349.html
> 
> 
>> Alan
>> 
>> 
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2022-04-11 21:39 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-04-06 16:41 [PATCH] nvme-core: mark passthru requests RQF_QUIET flag Chaitanya Kulkarni
2022-04-06 16:52 ` Keith Busch
2022-04-06 17:01   ` Chaitanya Kulkarni
2022-04-06 17:21     ` Keith Busch
2022-04-06 22:06       ` Alan Adamson
2022-04-06 22:16         ` Chaitanya Kulkarni
2022-04-06 23:29           ` Alan Adamson
2022-04-07 19:40             ` Chaitanya Kulkarni
2022-04-07  8:48           ` Christoph Hellwig
2022-04-07  8:51         ` hch
2022-04-07 11:52           ` Sagi Grimberg
2022-04-07 20:10             ` Chaitanya Kulkarni
2022-04-07 20:35               ` Alan Adamson
2022-04-07 21:00                 ` Chaitanya Kulkarni
2022-04-07 21:13                   ` Alan Adamson
2022-04-08  2:13                     ` Chaitanya Kulkarni
2022-04-08 16:24                       ` Alan Adamson
2022-04-09  0:10                         ` Chaitanya Kulkarni
2022-04-11 18:31                           ` Alan Adamson
2022-04-11 19:53                             ` Chaitanya Kulkarni
2022-04-11 21:39                               ` Alan Adamson
2022-04-07 16:15           ` Alan Adamson
2022-04-06 17:03   ` Chaitanya Kulkarni
