* blktests test nvme/003 fails
@ 2020-05-27 23:55 Bart Van Assche
  2020-06-02  7:42 ` Chaitanya Kulkarni
  2020-06-03  8:43 ` Chaitanya Kulkarni
  0 siblings, 2 replies; 13+ messages in thread
From: Bart Van Assche @ 2020-05-27 23:55 UTC (permalink / raw)
  Cc: linux-nvme

Hi,

This morning I updated my local copy of Jens' for-next branch. Since
that update, test nvme/003 fails. Is this perhaps a regression? This
is what appears in the system log when I run that test:

 run blktests nvme/003 at 2020-05-27 16:33:49
loop: module loaded
nvmet: adding nsid 1 to subsystem blktests-subsystem-1
nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:47f11c7a-c9c3-4964-b450-d1818ee33113.
nvme nvme0: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
nvme nvme0: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
nvmet: ctrl 1 keep-alive timer (15 seconds) expired!
nvmet: ctrl 1 fatal error occurred!
INFO: task nvme:992 blocked for more than 122 seconds.
      Not tainted 5.7.0-rc7-dbg+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
nvme            D27584   992    917 0x00004000
Call Trace:
 __schedule+0x4e4/0xf80
 schedule+0x7f/0x170
 schedule_timeout+0x171/0x1b0
 wait_for_completion+0x126/0x1b0
 nvmet_sq_destroy+0x85/0x100 [nvmet]
 nvme_loop_destroy_admin_queue+0x47/0x90 [nvme_loop]
 nvme_loop_shutdown_ctrl+0xbc/0xd0 [nvme_loop]
 nvme_loop_delete_ctrl_host+0x19/0x20 [nvme_loop]
 nvme_do_delete_ctrl+0x97/0xb0 [nvme_core]
 nvme_sysfs_delete+0xb8/0xd0 [nvme_core]
 dev_attr_store+0x42/0x60
 sysfs_kf_write+0x8b/0xb0
 kernfs_fop_write+0x158/0x250
 __vfs_write+0x4c/0x90
 vfs_write+0x14b/0x2d0
 ksys_write+0xdd/0x180
 __x64_sys_write+0x47/0x50
 do_syscall_64+0x6f/0x310
 entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x7f6f8115b057
Code: Bad RIP value.
RSP: 002b:00007fff13095688 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6f8115b057
RDX: 0000000000000001 RSI: 0000557079b5e9a7 RDI: 0000000000000004
RBP: 0000000000000004 R08: 0000557079b5f470 R09: 00007fff130957a0
R10: 0000000000000000 R11: 0000000000000246 R12: 000055707a99a3f0
R13: 00007fff13096da0 R14: 0000000000000003 R15: 00007fff130956e0

Showing all locks held in the system:
1 lock held by khungtaskd/52:
 #0: ffffffff827e5140 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire.constprop.0+0x0/0x30
3 locks held by nvme/992:
 #0: ffff8881dfac8430 (sb_writers#6){.+.+}-{0:0}, at: vfs_write+0x27b/0x2d0
 #1: ffff8881f20d3488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write+0xfa/0x250
 #2: ffff8881e132b3f8 (kn->count#167){++++}-{0:0}, at: kernfs_remove_self+0x1b7/0x250




* Re: blktests test nvme/003 fails
  2020-05-27 23:55 blktests test nvme/003 fails Bart Van Assche
@ 2020-06-02  7:42 ` Chaitanya Kulkarni
  2020-06-03  8:43 ` Chaitanya Kulkarni
  1 sibling, 0 replies; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-02  7:42 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: linux-nvme

On 5/27/20 4:55 PM, Bart Van Assche wrote:
> Hi,
> 
> This morning I updated my local copy of Jens' for-next branch. Since
> that update, test nvme/003 fails. Is this perhaps a regression? This
> is what appears in the system log when I run that test:
> 
>   run blktests nvme/003 at 2020-05-27 16:33:49
> loop: module loaded
> nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> nvmet: creating controller 1 for subsystem nqn.2014-08.org.nvmexpress.discovery for NQN nqn.2014-08.org.nvmexpress:uuid:47f11c7a-c9c3-4964-b450-d1818ee33113.
> nvme nvme0: new ctrl: "nqn.2014-08.org.nvmexpress.discovery"
> nvme nvme0: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
> nvmet: ctrl 1 keep-alive timer (15 seconds) expired!
> nvmet: ctrl 1 fatal error occurred!
>

Something is wrong; I'm also getting a similar error when disconnecting
a controller with a simple nvme-loop bdev-ns. I'm looking into it now,
unless someone already has a fix for it.

nvmet: ctrl 1 keep-alive timer (15 seconds) expired!
[  552.633034] nvmet: ctrl 1 fatal error occurred!
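
For context, the "keep-alive timer expired" / "fatal error occurred"
messages above come from the target's keep-alive work item, which
escalates to a controller fatal error when no keep-alive command arrives
within the KATO interval. Roughly (a simplified sketch of the nvmet
handler, not the exact 5.7 source, which also handles traffic-based
keep-alive):

static void nvmet_keep_alive_timer(struct work_struct *work)
{
	struct nvmet_ctrl *ctrl = container_of(to_delayed_work(work),
			struct nvmet_ctrl, ka_work);

	pr_err("ctrl %d keep-alive timer (%d seconds) expired!\n",
		ctrl->cntlid, ctrl->kato);

	nvmet_ctrl_fatal_error(ctrl);	/* logs "fatal error occurred!" */
}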




* Re: blktests test nvme/003 fails
  2020-05-27 23:55 blktests test nvme/003 fails Bart Van Assche
  2020-06-02  7:42 ` Chaitanya Kulkarni
@ 2020-06-03  8:43 ` Chaitanya Kulkarni
  2020-06-03 21:01   ` Sagi Grimberg
  1 sibling, 1 reply; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-03  8:43 UTC (permalink / raw)
  To: Bart Van Assche; +Cc: Sagi Grimberg, linux-nvme

Bart,

(+Sagi)

On 5/27/20 4:55 PM, Bart Van Assche wrote:
> Hi,
> 
> This morning I updated my local copy of Jens' for-next branch. Since
> that update, test nvme/003 fails. Is this perhaps a regression? This
> is what appears in the system log when I run that test:

Can you please let me know if the following patch fixes your problem?

 From e2b5e0bc63d6544feda4354c92c6c9fab11a3649 Mon Sep 17 00:00:00 2001
From: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Date: Wed, 3 Jun 2020 01:31:26 -0700
Subject: [PATCH] nvmet: free outstanding host AEN req

In nvmet_async_events_process() we only process AENs if there is an
open slot in ctrl->async_event_cmds[] and the AEN list generated by the
target is not empty. This leaves a host posted AEN outstanding when the
target generated AEN list is empty. We do clean up the target generated
entries from the AEN list in nvmet_ctrl_free() -> nvmet_async_events_free(),
but we never complete the AENs posted by the host. This leads to the
following problem:

The admin sq at the time of nvmet_sq_destroy() holds an extra percpu
reference (atomic value = 1), so in the following code path, after
switching the ref to atomic mode via RCU, the release function
(nvmet_sq_free()) is never called, which blocks the wait on
sq->free_done in nvmet_sq_destroy():

nvmet_sq_destroy()
  percpu_ref_kill_and_confirm()
  - __percpu_ref_switch_mode()
  --  __percpu_ref_switch_to_atomic()
  ---   call_rcu() -> percpu_ref_switch_to_atomic_rcu()
  ----     /* calls switch callback */
  - percpu_ref_put()
  -- percpu_ref_put_many(ref, 1)
  --- else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
  ----	ref->release(ref); <---- Not called.

This results in an indefinite hang:

  780 void nvmet_sq_destroy(struct nvmet_sq *sq)
...
  789         if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
  790                 nvmet_async_events_process(ctrl, status);
  791                 percpu_ref_put(&sq->ref);
  792         }
  793         percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
  794         wait_for_completion(&sq->confirm_done);
  795         wait_for_completion(&sq->free_done); <-- Hang here

This breaks the rest of the disconnect sequence. The problem seems to
have been introduced by commit 64f5e9cdd711b ("nvmet: fix memory leak
when removing namespaces and controllers concurrently").

This patch processes ctrl->async_event_cmds[] until no commands remain
in the array, irrespective of whether the AEN list is empty, and uses an
AEN list entry if one is available.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
---
  drivers/nvme/target/core.c | 16 ++++++++++------
  1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 6392bcd30bd7..40d80b785ecf 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -135,15 +135,19 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
  	struct nvmet_req *req;

  	mutex_lock(&ctrl->lock);
-	while (ctrl->nr_async_event_cmds && !list_empty(&ctrl->async_events)) {
-		aen = list_first_entry(&ctrl->async_events,
-				       struct nvmet_async_event, entry);
+	while (ctrl->nr_async_event_cmds) {
  		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
-		if (status == 0)
+		aen = NULL;
+		if (!list_empty(&ctrl->async_events))
+			aen = list_first_entry(&ctrl->async_events,
+				       struct nvmet_async_event, entry);
+		if (status == 0 && aen)
  			nvmet_set_result(req, nvmet_async_event_result(aen));

-		list_del(&aen->entry);
-		kfree(aen);
+		if (aen) {
+			list_del(&aen->entry);
+			kfree(aen);
+		}

  		mutex_unlock(&ctrl->lock);
  		trace_nvmet_async_event(ctrl, req->cqe->result.u32);
-- 
2.22.1
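
To make the refcounting argument above concrete, here is a minimal
sketch of the percpu_ref lifecycle involved (an illustration of the API
semantics only, not nvmet code; confirm_cb stands in for
nvmet_confirm_sq): the release callback runs only once every
outstanding reference has been put, so a reference held by an
un-completed AEN request keeps it from ever firing.

static void sq_free(struct percpu_ref *ref)
{
	/* Runs only when the refcount finally reaches zero. */
	struct nvmet_sq *sq = container_of(ref, struct nvmet_sq, ref);

	complete(&sq->free_done);
}

/* Setup: percpu_ref_init() holds one base reference. */
percpu_ref_init(&sq->ref, sq_free, 0, GFP_KERNEL);

/* Each queued command takes a reference ... */
percpu_ref_get(&sq->ref);	/* e.g. an outstanding AER command */

/* Teardown: switch to atomic mode and drop the base reference. */
percpu_ref_kill_and_confirm(&sq->ref, confirm_cb);

/*
 * sq_free() has NOT run yet: the AER's reference is still held.
 * Only completing that request (and putting its reference)
 * lets wait_for_completion(&sq->free_done) make progress:
 */
percpu_ref_put(&sq->ref);	/* now sq_free() fires */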




* Re: blktests test nvme/003 fails
  2020-06-03  8:43 ` Chaitanya Kulkarni
@ 2020-06-03 21:01   ` Sagi Grimberg
  2020-06-03 22:31     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 13+ messages in thread
From: Sagi Grimberg @ 2020-06-03 21:01 UTC (permalink / raw)
  To: Chaitanya Kulkarni, Bart Van Assche; +Cc: linux-nvme



On 6/3/20 1:43 AM, Chaitanya Kulkarni wrote:
> Bart,
> 
> (+Sagi)
> 
> On 5/27/20 4:55 PM, Bart Van Assche wrote:
>> Hi,
>>
>> This morning I updated my local copy of Jens' for-next branch. Since
>> that update, test nvme/003 fails. Is this perhaps a regression? This
>> is what appears in the system log when I run that test:
> 
> Can you please let me know if the following patch fixes your problem?
> 
>   From e2b5e0bc63d6544feda4354c92c6c9fab11a3649 Mon Sep 17 00:00:00 2001
> From: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> Date: Wed, 3 Jun 2020 01:31:26 -0700
> Subject: [PATCH] nvmet: free outstanding host AEN req
> 
> In nvmet_async_events_process() we only process AENs if there is an
> open slot in ctrl->async_event_cmds[] and the AEN list generated by the
> target is not empty. This leaves a host posted AEN outstanding when the
> target generated AEN list is empty. We do clean up the target generated
> entries from the AEN list in nvmet_ctrl_free() -> nvmet_async_events_free(),
> but we never complete the AENs posted by the host. This leads to the
> following problem:
> 
> The admin sq at the time of nvmet_sq_destroy() holds an extra percpu
> reference (atomic value = 1), so in the following code path, after
> switching the ref to atomic mode via RCU, the release function
> (nvmet_sq_free()) is never called, which blocks the wait on
> sq->free_done in nvmet_sq_destroy():
> 
> nvmet_sq_destroy()
>    percpu_ref_kill_and_confirm()
>    - __percpu_ref_switch_mode()
>    --  __percpu_ref_switch_to_atomic()
>    ---   call_rcu() -> percpu_ref_switch_to_atomic_rcu()
>    ----     /* calls switch callback */
>    - percpu_ref_put()
>    -- percpu_ref_put_many(ref, 1)
>    --- else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
>    ----	ref->release(ref); <---- Not called.
> 
> This results in an indefinite hang:
> 
>    780 void nvmet_sq_destroy(struct nvmet_sq *sq)
> ...
>    789         if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>    790                 nvmet_async_events_process(ctrl, status);
>    791                 percpu_ref_put(&sq->ref);
>    792         }
>    793         percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
>    794         wait_for_completion(&sq->confirm_done);
>    795         wait_for_completion(&sq->free_done); <-- Hang here
> 
> This breaks the rest of the disconnect sequence. The problem seems to
> have been introduced by commit 64f5e9cdd711b ("nvmet: fix memory leak
> when removing namespaces and controllers concurrently").
> 
> This patch processes ctrl->async_event_cmds[] until no commands remain
> in the array, irrespective of whether the AEN list is empty, and uses an
> AEN list entry if one is available.
> 
> Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
> ---
>    drivers/nvme/target/core.c | 16 ++++++++++------
>    1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index 6392bcd30bd7..40d80b785ecf 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -135,15 +135,19 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
>    	struct nvmet_req *req;
> 
>    	mutex_lock(&ctrl->lock);
> -	while (ctrl->nr_async_event_cmds && !list_empty(&ctrl->async_events)) {
> -		aen = list_first_entry(&ctrl->async_events,
> -				       struct nvmet_async_event, entry);
> +	while (ctrl->nr_async_event_cmds) {
>    		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
> -		if (status == 0)
> +		aen = NULL;
> +		if (!list_empty(&ctrl->async_events))
> +			aen = list_first_entry(&ctrl->async_events,
> +				       struct nvmet_async_event, entry);

Just use list_first_entry_or_null

> +		if (status == 0 && aen)
>    			nvmet_set_result(req, nvmet_async_event_result(aen));
> 
> -		list_del(&aen->entry);
> -		kfree(aen);
> +		if (aen) {
> +			list_del(&aen->entry);
> +			kfree(aen);
> +		}

You already condition on aen when setting the result; just free it
after you use it.
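
Applying both comments, the loop might read something like this (a
sketch of the suggested shape only, assuming the surrounding nvmet
definitions; not a tested patch):

	mutex_lock(&ctrl->lock);
	while (ctrl->nr_async_event_cmds) {
		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
		aen = list_first_entry_or_null(&ctrl->async_events,
					       struct nvmet_async_event, entry);
		if (status == 0 && aen) {
			/* Use the event, then free it right away. */
			nvmet_set_result(req, nvmet_async_event_result(aen));
			list_del(&aen->entry);
			kfree(aen);
		}
		mutex_unlock(&ctrl->lock);
		trace_nvmet_async_event(ctrl, req->cqe->result.u32);
		nvmet_req_complete(req, status);
		mutex_lock(&ctrl->lock);
	}
	mutex_unlock(&ctrl->lock);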



* Re: blktests test nvme/003 fails
  2020-06-03 21:01   ` Sagi Grimberg
@ 2020-06-03 22:31     ` Chaitanya Kulkarni
  2020-06-04  6:25       ` Chaitanya Kulkarni
  0 siblings, 1 reply; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-03 22:31 UTC (permalink / raw)
  To: Sagi Grimberg, Bart Van Assche; +Cc: linux-nvme

On 6/3/20 2:01 PM, Sagi Grimberg wrote:
>> This breaks the rest of the disconnect sequence. The problem seems to
>> have been introduced by commit 64f5e9cdd711b ("nvmet: fix memory leak
>> when removing namespaces and controllers concurrently").
>>
>> This patch processes ctrl->async_event_cmds[] until no commands remain
>> in the array, irrespective of whether the AEN list is empty, and uses an
>> AEN list entry if one is available.
>>
>> Signed-off-by: Chaitanya Kulkarni<chaitanya.kulkarni@wdc.com>
>> ---
>>     drivers/nvme/target/core.c | 16 ++++++++++------
>>     1 file changed, 10 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
>> index 6392bcd30bd7..40d80b785ecf 100644
>> --- a/drivers/nvme/target/core.c
>> +++ b/drivers/nvme/target/core.c
>> @@ -135,15 +135,19 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
>>     	struct nvmet_req *req;
>>
>>     	mutex_lock(&ctrl->lock);
>> -	while (ctrl->nr_async_event_cmds && !list_empty(&ctrl->async_events)) {
>> -		aen = list_first_entry(&ctrl->async_events,
>> -				       struct nvmet_async_event, entry);
>> +	while (ctrl->nr_async_event_cmds) {
>>     		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
>> -		if (status == 0)
>> +		aen = NULL;
>> +		if (!list_empty(&ctrl->async_events))
>> +			aen = list_first_entry(&ctrl->async_events,
>> +				       struct nvmet_async_event, entry);
> Just use list_first_entry_or_null
> 
>> +		if (status == 0 && aen)
>>     			nvmet_set_result(req, nvmet_async_event_result(aen));
>>
>> -		list_del(&aen->entry);
>> -		kfree(aen);
>> +		if (aen) {
>> +			list_del(&aen->entry);
>> +			kfree(aen);
>> +		}
> You already condition on aen when setting the result; just free it
> after you use it.
> 

Sounds good. I'll send V2.



* Re: blktests test nvme/003 fails
  2020-06-03 22:31     ` Chaitanya Kulkarni
@ 2020-06-04  6:25       ` Chaitanya Kulkarni
  2020-06-04 15:14         ` Sagi Grimberg
  0 siblings, 1 reply; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-04  6:25 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Bart Van Assche, linux-nvme

Sagi,

On 6/3/20 3:31 PM, Chaitanya Kulkarni wrote:
>> You already condition on aen when setting the result; just free it
>> after you use it.
>>
> Sounds good. I'll send V2.
> 

What do you think of the following fix? It is clearer and simpler
than modifying nvmet_async_events_process().

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 6392bcd30bd7..b494a902c3fc 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -777,6 +777,20 @@ static void nvmet_confirm_sq(struct percpu_ref *ref)
         complete(&sq->confirm_done);
  }

+static void nvmet_async_events_free_host_req(struct nvmet_ctrl *ctrl)
+{
+       struct nvmet_req *req;
+
+       mutex_lock(&ctrl->lock);
+       while (ctrl->nr_async_event_cmds) {
+               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
+               mutex_unlock(&ctrl->lock);
+               nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+               mutex_lock(&ctrl->lock);
+       }
+       mutex_unlock(&ctrl->lock);
+}
+
  void nvmet_sq_destroy(struct nvmet_sq *sq)
  {
         u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
@@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
          * If this is the admin queue, complete all AERs so that our
          * queue doesn't have outstanding requests on it.
          */
-       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
+       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
                 nvmet_async_events_process(ctrl, status);
+               /*
+                * Target controller's host posted events need to be
+                * explicitly checked and cleared since there is no 1:1
+                * mapping between host posted AEN requests and target
+                * generated AENs on the target controller's aen_list.
+                */
+               nvmet_async_events_free_host_req(ctrl);
+       }
         percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
         wait_for_completion(&sq->confirm_done);
         wait_for_completion(&sq->free_done);
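
One design note on the helper above: ctrl->lock is dropped around each
nvmet_req_complete() call, presumably because completing a request
calls back into the transport and should not run under the controller
lock (the existing nvmet_async_events_process() follows the same
pattern). The drop/re-take skeleton in isolation:

	mutex_lock(&ctrl->lock);
	while (ctrl->nr_async_event_cmds) {
		/* Pop one outstanding host AEN command under the lock. */
		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
		mutex_unlock(&ctrl->lock);	/* don't complete under the lock */
		nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
		mutex_lock(&ctrl->lock);	/* re-take before re-checking */
	}
	mutex_unlock(&ctrl->lock);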






* Re: blktests test nvme/003 fails
  2020-06-04  6:25       ` Chaitanya Kulkarni
@ 2020-06-04 15:14         ` Sagi Grimberg
  2020-06-05 23:15           ` Sagi Grimberg
  0 siblings, 1 reply; 13+ messages in thread
From: Sagi Grimberg @ 2020-06-04 15:14 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Bart Van Assche, linux-nvme



On 6/3/20 11:25 PM, Chaitanya Kulkarni wrote:
> Sagi,
> 
> On 6/3/20 3:31 PM, Chaitanya Kulkarni wrote:
>>> You already condition on aen when setting the result; just free it
>>> after you use it.
>>>
>> Sounds good. I'll send V2.
>>
> 
> What do you think of the following fix? It is clearer and simpler
> than modifying nvmet_async_events_process().
> 
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index 6392bcd30bd7..b494a902c3fc 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -777,6 +777,20 @@ static void nvmet_confirm_sq(struct percpu_ref *ref)
>           complete(&sq->confirm_done);
>    }
> 
> +static void nvmet_async_events_free_host_req(struct nvmet_ctrl *ctrl)
> +{
> +       struct nvmet_req *req;
> +
> +       mutex_lock(&ctrl->lock);
> +       while (ctrl->nr_async_event_cmds) {
> +               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
> +               mutex_unlock(&ctrl->lock);
> +               nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
> +               mutex_lock(&ctrl->lock);
> +       }
> +       mutex_unlock(&ctrl->lock);
> +}
> +
>    void nvmet_sq_destroy(struct nvmet_sq *sq)
>    {
>           u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
> @@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
>            * If this is the admin queue, complete all AERs so that our
>            * queue doesn't have outstanding requests on it.
>            */
> -       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
> +       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>                   nvmet_async_events_process(ctrl, status);
> +               /*
> +                * Target controller's host posted events need to be
> +                * explicitly checked and cleared since there is no 1:1
> +                * mapping between host posted AEN requests and target
> +                * generated AENs on the target controller's aen_list.
> +                */
> +               nvmet_async_events_free_host_req(ctrl);

Call it nvmet_async_events_fail_all(ctrl);

I think the older one was better, though. Can you send the latest one so
we can see them side by side?



* Re: blktests test nvme/003 fails
  2020-06-04 15:14         ` Sagi Grimberg
@ 2020-06-05 23:15           ` Sagi Grimberg
  2020-06-08  4:58             ` Chaitanya Kulkarni
  0 siblings, 1 reply; 13+ messages in thread
From: Sagi Grimberg @ 2020-06-05 23:15 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Bart Van Assche, linux-nvme


>>    void nvmet_sq_destroy(struct nvmet_sq *sq)
>>    {
>>           u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
>> @@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
>>            * If this is the admin queue, complete all AERs so that our
>>            * queue doesn't have outstanding requests on it.
>>            */
>> -       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
>> +       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>>                   nvmet_async_events_process(ctrl, status);
>> +               /*
>> +                * Target controller's host posted events need to be
>> +                * explicitly checked and cleared since there is no 1:1
>> +                * mapping between host posted AEN requests and target
>> +                * generated AENs on the target controller's aen_list.
>> +                */
>> +               nvmet_async_events_free_host_req(ctrl);
> 
> Call it nvmet_async_events_fail_all(ctrl);
> 
> I think the older one was better, though. Can you send the latest one so
> we can see them side by side?

Are you sending a patch, Chaitanya?



* Re: blktests test nvme/003 fails
  2020-06-05 23:15           ` Sagi Grimberg
@ 2020-06-08  4:58             ` Chaitanya Kulkarni
  2020-06-08  5:34               ` Sagi Grimberg
  0 siblings, 1 reply; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-08  4:58 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Bart Van Assche, linux-nvme

Sagi,

On 6/5/20 4:15 PM, Sagi Grimberg wrote:
> 
>>>     void nvmet_sq_destroy(struct nvmet_sq *sq)
>>>     {
>>>            u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
>>> @@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
>>>             * If this is the admin queue, complete all AERs so that our
>>>             * queue doesn't have outstanding requests on it.
>>>             */
>>> -       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
>>> +       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>>>                    nvmet_async_events_process(ctrl, status);
>>> +               /*
>>> +                * Target controller's host posted events need to be
>>> +                * explicitly checked and cleared since there is no 1:1
>>> +                * mapping between host posted AEN requests and target
>>> +                * generated AENs on the target controller's aen_list.
>>> +                */
>>> +               nvmet_async_events_free_host_req(ctrl);
>>
>> Call it nvmet_async_events_fail_all(ctrl);
>>
>> I think the older one was better, though. Can you send the latest one so
>> we can see them side by side?
> 
> Are you sending a patch, Chaitanya?
> 
Sorry for the delay.

Here is the initial patch with a modification to [1], which had a bug I
fixed here; it clears out the outstanding AENs in
nvmet_async_events_process():

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 6392bcd30bd7..843da121cddf 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -139,12 +139,26 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
                 aen = list_first_entry(&ctrl->async_events,
                                        struct nvmet_async_event, entry);
                 req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
-               if (status == 0)
+               if (status == 0 && aen) {
                         nvmet_set_result(req, nvmet_async_event_result(aen));
-
-               list_del(&aen->entry);
+                       list_del(&aen->entry);
+               }
                 kfree(aen);

+               trace_nvmet_async_event(ctrl, req->cqe->result.u32);
+               nvmet_req_complete(req, status);
+               mutex_lock(&ctrl->lock);
+       }
+       /*
+        * When status != 0 we are called from nvmet_sq_destroy() context,
+        * which means we need to complete the remaining host posted
+        * outstanding requests in ctrl->async_event_cmds[], which don't
+        * have a 1:1 mapping onto the ctrl->async_events list, in order
+        * to put the reference taken by each outstanding req so that we
+        * can make progress in nvmet_sq_destroy() ->
+        * wait_for_completion(&sq->free_done).
+        */
+       while (status != 0 && ctrl->nr_async_event_cmds) {
+               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
                 mutex_unlock(&ctrl->lock);
                 trace_nvmet_async_event(ctrl, req->cqe->result.u32);
                 nvmet_req_complete(req, status);


Here is the new patch, which clears up the outstanding AENs in a
separate function, from [2]:

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 6392bcd30bd7..b494a902c3fc 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -777,6 +777,20 @@ static void nvmet_confirm_sq(struct percpu_ref *ref)
          complete(&sq->confirm_done);
   }

+static void nvmet_async_events_free_host_req(struct nvmet_ctrl *ctrl)
+{
+       struct nvmet_req *req;
+
+       mutex_lock(&ctrl->lock);
+       while (ctrl->nr_async_event_cmds) {
+               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
+               mutex_unlock(&ctrl->lock);
+               nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
+               mutex_lock(&ctrl->lock);
+       }
+       mutex_unlock(&ctrl->lock);
+}
+
   void nvmet_sq_destroy(struct nvmet_sq *sq)
   {
          u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
@@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
           * If this is the admin queue, complete all AERs so that our
           * queue doesn't have outstanding requests on it.
           */
-       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
+       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
                  nvmet_async_events_process(ctrl, status);
+               /*
+                * Target controller's host posted events need to be
+                * explicitly checked and cleared since there is no 1:1
+                * mapping between host posted AEN requests and target
+                * generated AENs on the target controller's aen_list.
+                */
+               nvmet_async_events_free_host_req(ctrl);
+       }
          percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
          wait_for_completion(&sq->confirm_done);
          wait_for_completion(&sq->free_done);

[1]http://lists.infradead.org/pipermail/linux-nvme/2020-June/030823.html
[2]http://lists.infradead.org/pipermail/linux-nvme/2020-June/030839.html



* Re: blktests test nvme/003 fails
  2020-06-08  4:58             ` Chaitanya Kulkarni
@ 2020-06-08  5:34               ` Sagi Grimberg
  2020-06-08  5:38                 ` Chaitanya Kulkarni
  2020-06-08  6:20                 ` Chaitanya Kulkarni
  0 siblings, 2 replies; 13+ messages in thread
From: Sagi Grimberg @ 2020-06-08  5:34 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Bart Van Assche, linux-nvme



On 6/7/20 9:58 PM, Chaitanya Kulkarni wrote:
> Sagi,
> 
> On 6/5/20 4:15 PM, Sagi Grimberg wrote:
>>
>>>>      void nvmet_sq_destroy(struct nvmet_sq *sq)
>>>>      {
>>>>             u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
>>>> @@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
>>>>              * If this is the admin queue, complete all AERs so that our
>>>>              * queue doesn't have outstanding requests on it.
>>>>              */
>>>> -       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
>>>> +       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>>>>                     nvmet_async_events_process(ctrl, status);
>>>> +               /*
>>>> +                * Target controller's host posted events need to be
>>>> +                * explicitly checked and cleared since there is no 1:1
>>>> +                * mapping between host posted AEN requests and target
>>>> +                * generated AENs on the target controller's aen_list.
>>>> +                */
>>>> +               nvmet_async_events_free_host_req(ctrl);
>>>
>>> Call it nvmet_async_events_fail_all(ctrl);
>>>
>>> I think the older one was better, though. Can you send the latest one so
>>> we can see them side by side?
>>
>> Are you sending a patch, Chaitanya?
>>
> Sorry for the delay.
> 
> Here is the initial patch with a modification to [1], which had a bug I
> fixed here; it clears out the outstanding AENs in
> nvmet_async_events_process():

Chaitanya,

While I liked this patch better, did you check if the events are
coming out correctly? When I ran this patch I saw that I was getting
constant NS_CHANGE events in udevadm monitor...

I think we want patch 2 instead...

> 
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index 6392bcd30bd7..843da121cddf 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -139,12 +139,26 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
>                   aen = list_first_entry(&ctrl->async_events,
>                                          struct nvmet_async_event, entry);
>                   req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
> -               if (status == 0)
> +               if (status == 0 && aen) {
>                           nvmet_set_result(req, nvmet_async_event_result(aen));
> -
> -               list_del(&aen->entry);
> +                       list_del(&aen->entry);
> +               }
>                   kfree(aen);
> 
> +               trace_nvmet_async_event(ctrl, req->cqe->result.u32);
> +               nvmet_req_complete(req, status);
> +               mutex_lock(&ctrl->lock);
> +       }
> +       /*
> +        * When status != 0 we are called from nvmet_sq_destroy() context,
> +        * which means we need to complete the remaining host posted
> +        * outstanding requests in ctrl->async_event_cmds[], which don't
> +        * have a 1:1 mapping onto the ctrl->async_events list, in order
> +        * to put the reference taken by each outstanding req so that we
> +        * can make progress in nvmet_sq_destroy() ->
> +        * wait_for_completion(&sq->free_done).
> +        */
> +       while (status != 0 && ctrl->nr_async_event_cmds) {
> +               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
>                   mutex_unlock(&ctrl->lock);
>                   trace_nvmet_async_event(ctrl, req->cqe->result.u32);
>                   nvmet_req_complete(req, status);
> 
> 
> Here is the new patch, which clears up the outstanding AENs in a
> separate function, from [2]:
> 
> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> index 6392bcd30bd7..b494a902c3fc 100644
> --- a/drivers/nvme/target/core.c
> +++ b/drivers/nvme/target/core.c
> @@ -777,6 +777,20 @@ static void nvmet_confirm_sq(struct percpu_ref *ref)
>            complete(&sq->confirm_done);
>     }
> 
> +static void nvmet_async_events_free_host_req(struct nvmet_ctrl *ctrl)

Let's call it nvmet_async_events_failall and keep it as is.

> +{
> +       struct nvmet_req *req;
> +
> +       mutex_lock(&ctrl->lock);
> +       while (ctrl->nr_async_event_cmds) {
> +               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
> +               mutex_unlock(&ctrl->lock);
> +               nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
> +               mutex_lock(&ctrl->lock);
> +       }
> +       mutex_unlock(&ctrl->lock);
> +}
> +
>     void nvmet_sq_destroy(struct nvmet_sq *sq)
>     {
>            u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
> @@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
>             * If this is the admin queue, complete all AERs so that our
>             * queue doesn't have outstanding requests on it.
>             */
> -       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
> +       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>                    nvmet_async_events_process(ctrl, status);
> +               /*
> +                * Target controller's host posted events need to be
> +                * explicitly checked and cleared since there is no 1:1
> +                * mapping between host posted AEN requests and target
> +                * generated AENs on the target controller's aen_list.
> +                */
> +               nvmet_async_events_free_host_req(ctrl);
> +       }
>            percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
>            wait_for_completion(&sq->confirm_done);
>            wait_for_completion(&sq->free_done);
> 
> [1]http://lists.infradead.org/pipermail/linux-nvme/2020-June/030823.html
> [2]http://lists.infradead.org/pipermail/linux-nvme/2020-June/030839.html
> 



* Re: blktests test nvme/003 fails
  2020-06-08  5:34               ` Sagi Grimberg
@ 2020-06-08  5:38                 ` Chaitanya Kulkarni
  2020-06-08  6:20                 ` Chaitanya Kulkarni
  1 sibling, 0 replies; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-08  5:38 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Bart Van Assche, linux-nvme

>> Here is the new patch which clears up the outstanding AENs in a separate
>> function from [2]:-
>>
>> diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
>> index 6392bcd30bd7..b494a902c3fc 100644
>> --- a/drivers/nvme/target/core.c
>> +++ b/drivers/nvme/target/core.c
>> @@ -777,6 +777,20 @@ static void nvmet_confirm_sq(struct percpu_ref *ref)
>>             complete(&sq->confirm_done);
>>      }
>>
>> +static void nvmet_async_events_free_host_req(struct nvmet_ctrl *ctrl)
> 
> Let's call it nvmet_async_events_failall and keep it as is.

Yeah, let me send an official patch with the name change.

> 
>> +{
>> +       struct nvmet_req *req;
>> +
>> +       mutex_lock(&ctrl->lock);
>> +       while (ctrl->nr_async_event_cmds) {
>> +               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
>> +               mutex_unlock(&ctrl->lock);
>> +               nvmet_req_complete(req, NVME_SC_INTERNAL | NVME_SC_DNR);
>> +               mutex_lock(&ctrl->lock);
>> +       }
>> +       mutex_unlock(&ctrl->lock);
>> +}
>> +
>>      void nvmet_sq_destroy(struct nvmet_sq *sq)
>>      {
>>             u16 status = NVME_SC_INTERNAL | NVME_SC_DNR;
>> @@ -786,8 +800,16 @@ void nvmet_sq_destroy(struct nvmet_sq *sq)
>>              * If this is the admin queue, complete all AERs so that our
>>              * queue doesn't have outstanding requests on it.
>>              */
>> -       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
>> +       if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq) {
>>                     nvmet_async_events_process(ctrl, status);
>> +               /*
>> +                * Target controller's host posted events need to be
>> +                * explicitly checked and cleared since there is no 1:1
>> +                * mapping between host posted AEN requests and target
>> +                * generated AENs on the target controller's aen_list.
>> +                */
>> +               nvmet_async_events_free_host_req(ctrl);
>> +       }
>>             percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
>>             wait_for_completion(&sq->confirm_done);
>>             wait_for_completion(&sq->free_done);
>>
>> [1]http://lists.infradead.org/pipermail/linux-nvme/2020-June/030823.html
>> [2]http://lists.infradead.org/pipermail/linux-nvme/2020-June/030839.html
>>




* Re: blktests test nvme/003 fails
  2020-06-08  5:34               ` Sagi Grimberg
  2020-06-08  5:38                 ` Chaitanya Kulkarni
@ 2020-06-08  6:20                 ` Chaitanya Kulkarni
  2020-06-08 16:33                   ` Sagi Grimberg
  1 sibling, 1 reply; 13+ messages in thread
From: Chaitanya Kulkarni @ 2020-06-08  6:20 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Bart Van Assche, linux-nvme

Sagi,

On 6/7/20 10:34 PM, Sagi Grimberg wrote:
> Chaitanya,
> 
> While I liked this patch better, did you check if the events are
> coming out correctly? When I ran this patch I saw that I was getting
> constant NS_CHANGE events in udevadm monitor...
> 
> I think we want patch 2 instead...

Wait, that was the bug I fixed relative to the initial patch in my reply
to Bart's original email: it was running the while loop infinitely since
ctrl->nr_async_event_cmds stayed >= 1 and never got back to 0 due to the
outstanding AEN. Are you still getting that with the following fix?

With the following fix I can run blktests; it passes and only generates
one AEN from the target side:

blktest :-
----------
blktests (master) # ./check tests/nvme/003
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
     runtime  11.149s  ...  10.413s

Tracing :-
-----------
# echo 1 > /sys/kernel/debug/tracing/events/nvme/nvme_async_event/enable
# echo 1 > /sys/kernel/debug/tracing/events/nvmet/nvmet_async_event/enable
# cat /sys/kernel/debug/tracing/trace_pipe
             nvme-28716 [011] ...1  5485.088313: nvmet_async_event: nvmet1: NVME_AEN=0x000000 [NVME_AER_NOTICE_NS_CHANGED]
^C
#

This seems to match the behavior prior to the problem.
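
To restate the earlier failure mode (a reconstruction from the
description above, not verbatim kernel code): completing a host posted
AER at runtime lets the host immediately post a fresh one, which can
land while ctrl->lock is dropped, so the command count bounces back
above zero and the loop never drains:

	mutex_lock(&ctrl->lock);
	while (ctrl->nr_async_event_cmds) {	/* one outstanding AER */
		req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
		mutex_unlock(&ctrl->lock);
		nvmet_req_complete(req, 0);	/* host sees a completion ... */
		/*
		 * ... and may immediately post a new AER, which re-enters
		 * nvmet_execute_async_event() and pushes
		 * nr_async_event_cmds back to 1 before the lock is
		 * re-taken: an event storm instead of a clean drain.
		 */
		mutex_lock(&ctrl->lock);
	}
	mutex_unlock(&ctrl->lock);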

Can you please check the following patch? If the problem persists, I'll
send the 2nd version as we agreed. (Sorry for the confusion.)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 6392bcd30bd7..843da121cddf 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -139,12 +139,26 @@ static void nvmet_async_events_process(struct nvmet_ctrl *ctrl, u16 status)
                 aen = list_first_entry(&ctrl->async_events,
                                        struct nvmet_async_event, entry);
                 req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
-               if (status == 0)
+               if (status == 0 && aen) {
                         nvmet_set_result(req, nvmet_async_event_result(aen));
-
-               list_del(&aen->entry);
+                       list_del(&aen->entry);
+               }
                 kfree(aen);

+               trace_nvmet_async_event(ctrl, req->cqe->result.u32);
+               nvmet_req_complete(req, status);
+               mutex_lock(&ctrl->lock);
+       }
+       /*
+        * When status != 0 we are called from nvmet_sq_destroy() context,
+        * which means we need to complete the remaining host posted
+        * outstanding requests in ctrl->async_event_cmds[], which don't
+        * have a 1:1 mapping onto the ctrl->async_events list, in order
+        * to put the reference taken by each outstanding req so that we
+        * can make progress in nvmet_sq_destroy() ->
+        * wait_for_completion(&sq->free_done).
+        */
+       while (status != 0 && ctrl->nr_async_event_cmds) {
+               req = ctrl->async_event_cmds[--ctrl->nr_async_event_cmds];
                 mutex_unlock(&ctrl->lock);
                 trace_nvmet_async_event(ctrl, req->cqe->result.u32);
                 nvmet_req_complete(req, status);




* Re: blktests test nvme/003 fails
  2020-06-08  6:20                 ` Chaitanya Kulkarni
@ 2020-06-08 16:33                   ` Sagi Grimberg
  0 siblings, 0 replies; 13+ messages in thread
From: Sagi Grimberg @ 2020-06-08 16:33 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: Bart Van Assche, linux-nvme


> Wait, that was the bug I fixed relative to the initial patch in my reply
> to Bart's original email: it was running the while loop infinitely since
> ctrl->nr_async_event_cmds stayed >= 1 and never got back to 0 due to the
> outstanding AEN. Are you still getting that with the following fix?
> 
> With the following fix I can run blktests; it passes and only generates
> one AEN from the target side:

If you tested it, I trust you.

Please send a formal patch, and add:
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


