* [PATCHv2 0/2] nvmet: avoid circular locking warning
@ 2023-11-02 14:19 Hannes Reinecke
  2023-11-02 14:19 ` [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue() Hannes Reinecke
  2023-11-02 14:19 ` [PATCH 2/2] nvmet-tcp: " Hannes Reinecke
  0 siblings, 2 replies; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-02 14:19 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke

nvmet-rdma and nvmet-tcp trigger a circular locking warning when
tearing down; the reason is a call to 'flush_workqueue' when creating
a new controller, which tries to cover for the fact that old controller
instances might still be in the process of tearing down.
However, this is pure speculation, as we don't know (and don't check)
whether there really _are_ controllers in shutdown.
And even if there were, that state should be short-lived, and would have
been resolved by connecting just a tad later.
So this patchset returns 'controller busy' if we really find ourselves in
this situation, allowing the caller to reconnect later.
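In hypothetical shorthand, the pattern the series adopts looks like this
(a simplified standalone model, not the kernel code: the struct, state
names, and status values here are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the nvmet per-transport queue list. */
enum q_state { Q_LIVE, Q_DISCONNECTING };

struct queue {
	enum q_state state;
	struct queue *next;
};

#define SC_SUCCESS           0x0
#define SC_CONNECT_CTRL_BUSY 0x1	/* placeholder for NVME_SC_CONNECT_CTRL_BUSY */

/*
 * Instead of flush_workqueue() (which can deadlock against the very work
 * item we are running in), just scan for queues still tearing down and
 * tell the host to retry: no waiting, hence no lock cycle.
 */
unsigned short install_queue(const struct queue *head)
{
	for (const struct queue *q = head; q; q = q->next)
		if (q->state == Q_DISCONNECTING)
			return SC_CONNECT_CTRL_BUSY;
	return SC_SUCCESS;
}
```

The host sees a transient failure and reconnects a moment later, by which
time the teardown has normally finished.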

Changes to the original version:
- Update the rdma patch to implement 'install_queue()'
- Include suggestions from Jens Axboe

Hannes Reinecke (2):
  nvmet-rdma: avoid circular locking dependency on install_queue()
  nvmet-tcp: avoid circular locking dependency on install_queue()

 drivers/nvme/target/rdma.c | 24 +++++++++++++++++++-----
 drivers/nvme/target/tcp.c  | 13 +++++++++++--
 2 files changed, 30 insertions(+), 7 deletions(-)

-- 
2.35.3




* [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-02 14:19 [PATCHv2 0/2] nvmet: avoid circular locking warning Hannes Reinecke
@ 2023-11-02 14:19 ` Hannes Reinecke
  2023-11-03  8:23   ` Christoph Hellwig
  2023-11-02 14:19 ` [PATCH 2/2] nvmet-tcp: " Hannes Reinecke
  1 sibling, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-02 14:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke,
	Shin'ichiro Kawasaki

nvmet_rdma_install_queue() is driven from the ->io_work workqueue
function, but will call flush_workqueue() which might trigger
->release_work() which in itself calls flush_work on ->io_work.
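The cycle lockdep complains about, sketched schematically (simplified;
these are not the literal call sites):

```
io_work (runs on nvmet_wq)
  -> install_queue()
       -> flush_workqueue(nvmet_wq)          // waits for all nvmet_wq items
            -> release_work (queued on nvmet_wq)
                 -> flush_work(&queue->io_work)   // waits on io_work

io_work transitively waits on work that waits on io_work, so lockdep
reports a circular nvmet_wq -> io_work -> nvmet_wq dependency.
```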

To avoid that, move the check for any pending queue disconnects
into the 'install_queue()' callback. This replicates what the tcp
code is already doing, and also allows us to return a
'controller busy' connect response until all disconnects
have completed.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
 drivers/nvme/target/rdma.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca43a6d..8e011bdec3b0 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1582,11 +1582,6 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 		goto put_device;
 	}
 
-	if (queue->host_qid == 0) {
-		/* Let inflight controller teardown complete */
-		flush_workqueue(nvmet_wq);
-	}
-
 	ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
 	if (ret) {
 		/*
@@ -1975,6 +1970,24 @@ static void nvmet_rdma_remove_port(struct nvmet_port *nport)
 	kfree(port);
 }
 
+static u16 nvmet_rdma_install_queue(struct nvmet_sq *sq)
+{
+	if (sq->qid == 0) {
+		struct nvmet_rdma_queue *q;
+
+		mutex_lock(&nvmet_rdma_queue_mutex);
+		list_for_each_entry(q, &nvmet_rdma_queue_list, queue_list) {
+			if (q->state == NVMET_RDMA_Q_DISCONNECTING) {
+				mutex_unlock(&nvmet_rdma_queue_mutex);
+				/* Retry for pending controller teardown */
+				return NVME_SC_CONNECT_CTRL_BUSY;
+			}
+		}
+		mutex_unlock(&nvmet_rdma_queue_mutex);
+	}
+	return 0;
+}
+
 static void nvmet_rdma_disc_port_addr(struct nvmet_req *req,
 		struct nvmet_port *nport, char *traddr)
 {
@@ -2014,6 +2027,7 @@ static const struct nvmet_fabrics_ops nvmet_rdma_ops = {
 	.remove_port		= nvmet_rdma_remove_port,
 	.queue_response		= nvmet_rdma_queue_response,
 	.delete_ctrl		= nvmet_rdma_delete_ctrl,
+	.install_queue		= nvmet_rdma_install_queue,
 	.disc_traddr		= nvmet_rdma_disc_port_addr,
 	.get_mdts		= nvmet_rdma_get_mdts,
 	.get_max_queue_size	= nvmet_rdma_get_max_queue_size,
-- 
2.35.3




* [PATCH 2/2] nvmet-tcp: avoid circular locking dependency on install_queue()
  2023-11-02 14:19 [PATCHv2 0/2] nvmet: avoid circular locking warning Hannes Reinecke
  2023-11-02 14:19 ` [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue() Hannes Reinecke
@ 2023-11-02 14:19 ` Hannes Reinecke
  1 sibling, 0 replies; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-02 14:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke,
	Shin'ichiro Kawasaki

nvmet_tcp_install_queue() is driven from the ->io_work workqueue
function, but will call flush_workqueue() which might trigger
->release_work() which in itself calls flush_work on ->io_work.

To avoid that, check for queues in the disconnecting state,
and return 'controller busy' until all disconnects have completed.

Signed-off-by: Hannes Reinecke <hare@suse.de>
Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
 drivers/nvme/target/tcp.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index b3c1cc6690f6..850e40eb70a0 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -2121,8 +2121,17 @@ static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq)
 		container_of(sq, struct nvmet_tcp_queue, nvme_sq);
 
 	if (sq->qid == 0) {
-		/* Let inflight controller teardown complete */
-		flush_workqueue(nvmet_wq);
+		struct nvmet_tcp_queue *q;
+
+		mutex_lock(&nvmet_tcp_queue_mutex);
+		list_for_each_entry(q, &nvmet_tcp_queue_list, queue_list) {
+			if (q->state == NVMET_TCP_Q_DISCONNECTING) {
+				/* Retry for pending controller teardown */
+				mutex_unlock(&nvmet_tcp_queue_mutex);
+				return NVME_SC_CONNECT_CTRL_BUSY;
+			}
+		}
+		mutex_unlock(&nvmet_tcp_queue_mutex);
 	}
 
 	queue->nr_cmds = sq->size * 2;
-- 
2.35.3




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-02 14:19 ` [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue() Hannes Reinecke
@ 2023-11-03  8:23   ` Christoph Hellwig
  2023-11-03  8:53     ` Hannes Reinecke
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2023-11-03  8:23 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme,
	Shin'ichiro Kawasaki

On Thu, Nov 02, 2023 at 03:19:02PM +0100, Hannes Reinecke wrote:
> nvmet_rdma_install_queue() is driven from the ->io_work workqueue
> function, but will call flush_workqueue() which might trigger
> ->release_work() which in itself calls flush_work on ->io_work.
> 
> To avoid that move the check for any pending queue disconnects
> to the 'install_queue()' callback. This replicates what the tcp
> code is already doing, and also allows us to return a
> 'controller busy' connect response until all disconnects
> are completed.

So what are hosts going to do when they see NVME_SC_CONNECT_CTRL_BUSY?

I see no special casing for it in our host side code.



* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-03  8:23   ` Christoph Hellwig
@ 2023-11-03  8:53     ` Hannes Reinecke
  2023-11-03  9:19       ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-03  8:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, Keith Busch, linux-nvme, Shin'ichiro Kawasaki

On 11/3/23 09:23, Christoph Hellwig wrote:
> On Thu, Nov 02, 2023 at 03:19:02PM +0100, Hannes Reinecke wrote:
>> nvmet_rdma_install_queue() is driven from the ->io_work workqueue
>> function, but will call flush_workqueue() which might trigger
>> ->release_work() which in itself calls flush_work on ->io_work.
>>
>> To avoid that move the check for any pending queue disconnects
>> to the 'install_queue()' callback. This replicates what the tcp
>> code is already doing, and also allows us to return a
>> 'controller busy' connect response until all disconnects
>> are completed.
> 
> So what are hosts going to do when they see NVME_SC_CONNECT_CTRL_BUSY?
> 
> I see no special casing for it in our host side code.

Retry? The DNR bit is not set, so the default action should be to retry.
And the idea is that this condition is a very short-lived one anyway.
No?
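A simplified model of the retry decision Hannes is alluding to (the DNR
bit value 0x4000 matches Linux's NVME_SC_DNR; the function name and the
busy status value are illustrative, not the actual host-side code):

```c
#include <assert.h>
#include <stdbool.h>

#define SC_DNR 0x4000	/* "Do Not Retry" bit in the NVMe status field */

/* A failed connect is worth retrying as long as DNR is clear. */
bool connect_retryable(unsigned short status)
{
	return status != 0 && !(status & SC_DNR);
}
```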

Cheers,

Hannes




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-03  8:53     ` Hannes Reinecke
@ 2023-11-03  9:19       ` Christoph Hellwig
  2023-11-03 11:58         ` Hannes Reinecke
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2023-11-03  9:19 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme,
	Shin'ichiro Kawasaki

On Fri, Nov 03, 2023 at 09:53:05AM +0100, Hannes Reinecke wrote:
> Retry? The DNR bit is not set, so the default action should be to retry.
> And the idea is that this condition is a very short-lived one anyway.
> No?

Maybe.  Best to explicitly state what is going to happen and how you
tested it in the commit log..



* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-03  9:19       ` Christoph Hellwig
@ 2023-11-03 11:58         ` Hannes Reinecke
  2023-11-03 14:05           ` Christoph Hellwig
  0 siblings, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-03 11:58 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Sagi Grimberg, Keith Busch, linux-nvme, Shin'ichiro Kawasaki

On 11/3/23 10:19, Christoph Hellwig wrote:
> On Fri, Nov 03, 2023 at 09:53:05AM +0100, Hannes Reinecke wrote:
>> Retry? The DNR bit is not set, so the default action should be to retry.
>> And the idea is that this condition is a very short-lived one anyway.
>> No?
> 
> Maybe.  Best to explicitly state what is going to happen and how you
> tested it in the commit log..

Hmm. Or we just kill it.
According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing 
controller teardown") this is just for reducing the memory footprint.
Wonder if we need to bother, and whether it won't be better to remove
the whole thing entirely.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@suse.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Frankenstr. 146, 90461 Nürnberg
Managing Directors: I. Totev, A. Myers, A. McDonald, M. B. Moerman
(HRB 36809, AG Nürnberg)




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-03 11:58         ` Hannes Reinecke
@ 2023-11-03 14:05           ` Christoph Hellwig
  2023-11-20 13:48             ` Sagi Grimberg
  0 siblings, 1 reply; 19+ messages in thread
From: Christoph Hellwig @ 2023-11-03 14:05 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme,
	Shin'ichiro Kawasaki

> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing 
> controller teardown") this is just for reducing the memory footprint.
> Wonder if we need to bother, and whether it won't be better to remove
> the whole thing entirely.

Well, Sagi added it, so I'll let him chime in on the usefulness.




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-03 14:05           ` Christoph Hellwig
@ 2023-11-20 13:48             ` Sagi Grimberg
  2023-12-04 10:19               ` Sagi Grimberg
  0 siblings, 1 reply; 19+ messages in thread
From: Sagi Grimberg @ 2023-11-20 13:48 UTC (permalink / raw)
  To: Christoph Hellwig, Hannes Reinecke
  Cc: Keith Busch, linux-nvme, Shin'ichiro Kawasaki


>> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
>> controller teardown") this is just for reducing the memory footprint.
>> Wonder if we need to bother, and whether it won't be better to remove
>> the whole thing entirely.
> 
> Well, Sagi added it, so I'll let him chime in on the usefulness.

While I don't like having nvmet arbitrarily replying busy, and would
rather have lockdep simply accept that it's not a deadlock here, I guess
we can sidetrack it as proposed.

But Hannes, this is on the other extreme: now every connect that nvmet
gets is denied if there is even a single queue that is disconnecting
(global scope). Let's give it a sane backlog.
We use an rdma_listen backlog of 128, so maybe stick with this magic
number. This way we are busy only if more than 128 queues are tearing
down (to prevent the memory footprint from exploding), and hopefully that
is rare enough that a normal host never sees an arbitrary busy
rejection.

Same comment for nvmet-tcp.



* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-20 13:48             ` Sagi Grimberg
@ 2023-12-04 10:19               ` Sagi Grimberg
  2023-12-04 11:49                 ` Hannes Reinecke
  0 siblings, 1 reply; 19+ messages in thread
From: Sagi Grimberg @ 2023-12-04 10:19 UTC (permalink / raw)
  To: Christoph Hellwig, Hannes Reinecke
  Cc: Keith Busch, linux-nvme, Shin'ichiro Kawasaki



On 11/20/23 15:48, Sagi Grimberg wrote:
> 
>>> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
>>> controller teardown") this is just for reducing the memory footprint.
>>> Wonder if we need to bother, and whether it won't be better to remove
>>> the whole thing entirely.
>>
>> Well, Sagi added it, so I'll let him chime in on the usefulness.
> 
> While I don't like having nvmet arbitrarily replying busy and instead
> have lockdep simply just accept that its not a deadlock here, but we can
> simply just sidetrack it as proposed I guess.
> 
> But Hannes, this is on the other extreme.. Now every connect that nvmet
> gets, if there is even a single queue that is disconnecting (global
> scope), then the host is denied. Lets give it a sane backlog.
> We use rdma_listen backlog of 128, so maybe stick with this magic
> number... This way we are busy only if more than 128 queues are tearing
> down to prevent the memory footprint from exploding, and hopefully it is
> rare enough that the normal host does not see an arbitrary busy
> rejection.
> 
> Same comment for nvmet-tcp.

Hey Hannes, anything happened with this one?

Overall I think that the approach is fine, but I do think
that we need a backlog for it.



* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-12-04 10:19               ` Sagi Grimberg
@ 2023-12-04 11:49                 ` Hannes Reinecke
  2023-12-04 11:57                   ` Sagi Grimberg
  0 siblings, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2023-12-04 11:49 UTC (permalink / raw)
  To: Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, linux-nvme, Shin'ichiro Kawasaki

On 12/4/23 11:19, Sagi Grimberg wrote:
> 
> 
> On 11/20/23 15:48, Sagi Grimberg wrote:
>>
>>>> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
>>>> controller teardown") this is just for reducing the memory footprint.
>>>> Wonder if we need to bother, and whether it won't be better to remove
>>>> the whole thing entirely.
>>>
>>> Well, Sagi added it, so I'll let him chime in on the usefulness.
>>
>> While I don't like having nvmet arbitrarily replying busy and instead
>> have lockdep simply just accept that its not a deadlock here, but we can
>> simply just sidetrack it as proposed I guess.
>>
>> But Hannes, this is on the other extreme.. Now every connect that nvmet
>> gets, if there is even a single queue that is disconnecting (global
>> scope), then the host is denied. Lets give it a sane backlog.
>> We use rdma_listen backlog of 128, so maybe stick with this magic
>> number... This way we are busy only if more than 128 queues are tearing
>> down to prevent the memory footprint from exploding, and hopefully it is
>> rare enough that the normal host does not see an arbitrary busy
>> rejection.
>>
>> Same comment for nvmet-tcp.
> 
> Hey Hannes, anything happened with this one?
> 
> Overall I think that the approach is fine, but I do think
> that we need a backlog for it.

Hmm. The main issue here is the call to 'flush_workqueue()', which
triggers the circular locking warning. So a ratelimit would only help
us so much; the real issue is to get rid of the flush_workqueue()
call entirely.

What I can do is add this:

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 4cc27856aa8f..72bcc54701a0 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -2119,8 +2119,20 @@ static u16 nvmet_tcp_install_queue(struct 
nvmet_sq *sq)
                 container_of(sq, struct nvmet_tcp_queue, nvme_sq);

         if (sq->qid == 0) {
+               struct nvmet_tcp_queue *q;
+               int pending = 0;
+
                 /* Let inflight controller teardown complete */
-               flush_workqueue(nvmet_wq);
+               mutex_lock(&nvmet_tcp_queue_mutex);
+               list_for_each_entry(q, &nvmet_tcp_queue_list, queue_list) {
+                       if (q->nvme_sq.ctrl == sq->ctrl &&
+                           q->state == NVMET_TCP_Q_DISCONNECTING)
+                               pending++;
+               }
+               mutex_unlock(&nvmet_tcp_queue_mutex);
+               /* Retry for pending controller teardown */
+               if (pending)
+                       return NVME_SC_CONNECT_CTRL_BUSY;
         }

which then would only affect the controller we're connecting to.
Hmm?

Cheers,

Hannes




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-12-04 11:49                 ` Hannes Reinecke
@ 2023-12-04 11:57                   ` Sagi Grimberg
  2023-12-04 12:31                     ` Hannes Reinecke
  0 siblings, 1 reply; 19+ messages in thread
From: Sagi Grimberg @ 2023-12-04 11:57 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: Keith Busch, linux-nvme, Shin'ichiro Kawasaki



On 12/4/23 13:49, Hannes Reinecke wrote:
> On 12/4/23 11:19, Sagi Grimberg wrote:
>>
>>
>> On 11/20/23 15:48, Sagi Grimberg wrote:
>>>
>>>>> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
>>>>> controller teardown") this is just for reducing the memory footprint.
>>>>> Wonder if we need to bother, and whether it won't be better to remove
>>>>> the whole thing entirely.
>>>>
>>>> Well, Sagi added it, so I'll let him chime in on the usefulness.
>>>
>>> While I don't like having nvmet arbitrarily replying busy and instead
>>> have lockdep simply just accept that its not a deadlock here, but we can
>>> simply just sidetrack it as proposed I guess.
>>>
>>> But Hannes, this is on the other extreme.. Now every connect that nvmet
>>> gets, if there is even a single queue that is disconnecting (global
>>> scope), then the host is denied. Lets give it a sane backlog.
>>> We use rdma_listen backlog of 128, so maybe stick with this magic
>>> number... This way we are busy only if more than 128 queues are tearing
>>> down to prevent the memory footprint from exploding, and hopefully it is
>>> rare enough that the normal host does not see an arbitrary busy
>>> rejection.
>>>
>>> Same comment for nvmet-tcp.
>>
>> Hey Hannes, anything happened with this one?
>>
>> Overall I think that the approach is fine, but I do think
>> that we need a backlog for it.
> 
> Hmm. The main issue here is the call to 'flush_workqueue()', which 
> triggers the circular lock warning. So a ratelimit would only help
> us so much; the real issue is to get rid of the flush_workqueue()
> thingie.
> 
> What I can to is to add this:
> 
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 4cc27856aa8f..72bcc54701a0 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -2119,8 +2119,20 @@ static u16 nvmet_tcp_install_queue(struct 
> nvmet_sq *sq)
>                  container_of(sq, struct nvmet_tcp_queue, nvme_sq);
> 
>          if (sq->qid == 0) {
> +               struct nvmet_tcp_queue *q;
> +               int pending = 0;
> +
>                  /* Let inflight controller teardown complete */
> -               flush_workqueue(nvmet_wq);
> +               mutex_lock(&nvmet_tcp_queue_mutex);
> +               list_for_each_entry(q, &nvmet_tcp_queue_list, queue_list) {
> +                       if (q->nvme_sq.ctrl == sq->ctrl &&
> +                           q->state == NVMET_TCP_Q_DISCONNECTING)
> +                               pending++;
> +               }
> +               mutex_unlock(&nvmet_tcp_queue_mutex);
> +               /* Retry for pending controller teardown */
> +               if (pending)
> +                       return NVME_SC_CONNECT_CTRL_BUSY;
>          }
> 
> which then would only affect the controller we're connecting to.
> Hmm?

Still, I think we should give it a reasonable backlog; no reason to limit
this so hard, as we may hit it more often than we'd like, and the sole
purpose here is to avoid memory overrun.



* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-12-04 11:57                   ` Sagi Grimberg
@ 2023-12-04 12:31                     ` Hannes Reinecke
  2023-12-04 12:46                       ` Sagi Grimberg
  0 siblings, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2023-12-04 12:31 UTC (permalink / raw)
  To: Sagi Grimberg, Christoph Hellwig
  Cc: Keith Busch, linux-nvme, Shin'ichiro Kawasaki

On 12/4/23 12:57, Sagi Grimberg wrote:
> 
> 
> On 12/4/23 13:49, Hannes Reinecke wrote:
>> On 12/4/23 11:19, Sagi Grimberg wrote:
>>>
>>>
>>> On 11/20/23 15:48, Sagi Grimberg wrote:
>>>>
>>>>>> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
>>>>>> controller teardown") this is just for reducing the memory footprint.
>>>>>> Wonder if we need to bother, and whether it won't be better to remove
>>>>>> the whole thing entirely.
>>>>>
>>>>> Well, Sagi added it, so I'll let him chime in on the usefulness.
>>>>
>>>> While I don't like having nvmet arbitrarily replying busy and instead
>>>> have lockdep simply just accept that its not a deadlock here, but we 
>>>> can
>>>> simply just sidetrack it as proposed I guess.
>>>>
>>>> But Hannes, this is on the other extreme.. Now every connect that nvmet
>>>> gets, if there is even a single queue that is disconnecting (global
>>>> scope), then the host is denied. Lets give it a sane backlog.
>>>> We use rdma_listen backlog of 128, so maybe stick with this magic
>>>> number... This way we are busy only if more than 128 queues are tearing
>>>> down to prevent the memory footprint from exploding, and hopefully 
>>>> it is
>>>> rare enough that the normal host does not see an arbitrary busy
>>>> rejection.
>>>>
>>>> Same comment for nvmet-tcp.
>>>
>>> Hey Hannes, anything happened with this one?
>>>
>>> Overall I think that the approach is fine, but I do think
>>> that we need a backlog for it.
>>
>> Hmm. The main issue here is the call to 'flush_workqueue()', which 
>> triggers the circular lock warning. So a ratelimit would only help
>> us so much; the real issue is to get rid of the flush_workqueue()
>> thingie.
>>
>> What I can to is to add this:
>>
>> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
>> index 4cc27856aa8f..72bcc54701a0 100644
>> --- a/drivers/nvme/target/tcp.c
>> +++ b/drivers/nvme/target/tcp.c
>> @@ -2119,8 +2119,20 @@ static u16 nvmet_tcp_install_queue(struct 
>> nvmet_sq *sq)
>>                  container_of(sq, struct nvmet_tcp_queue, nvme_sq);
>>
>>          if (sq->qid == 0) {
>> +               struct nvmet_tcp_queue *q;
>> +               int pending = 0;
>> +
>>                  /* Let inflight controller teardown complete */
>> -               flush_workqueue(nvmet_wq);
>> +               mutex_lock(&nvmet_tcp_queue_mutex);
>> +               list_for_each_entry(q, &nvmet_tcp_queue_list, 
>> queue_list) {
>> +                       if (q->nvme_sq.ctrl == sq->ctrl &&
>> +                           q->state == NVMET_TCP_Q_DISCONNECTING)
>> +                               pending++;
>> +               }
>> +               mutex_unlock(&nvmet_tcp_queue_mutex);
>> +               /* Retry for pending controller teardown */
>> +               if (pending)
>> +                       return NVME_SC_CONNECT_CTRL_BUSY;
>>          }
>>
>> which then would only affect the controller we're connecting to.
>> Hmm?
> 
> Still I think we should give a reasonable backlog, no reason to limit
> this as we may hit this more often than we'd like and the sole purpose
> here is to avoid memory overrun.

So would 'if (pending > tcp_backlog)' (with e.g. tcp_backlog = 20) fit
the bill here?

Cheers,

Hannes




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-12-04 12:31                     ` Hannes Reinecke
@ 2023-12-04 12:46                       ` Sagi Grimberg
  2023-12-07  5:54                         ` Shinichiro Kawasaki
  0 siblings, 1 reply; 19+ messages in thread
From: Sagi Grimberg @ 2023-12-04 12:46 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig
  Cc: Keith Busch, linux-nvme, Shin'ichiro Kawasaki



On 12/4/23 14:31, Hannes Reinecke wrote:
> On 12/4/23 12:57, Sagi Grimberg wrote:
>>
>>
>> On 12/4/23 13:49, Hannes Reinecke wrote:
>>> On 12/4/23 11:19, Sagi Grimberg wrote:
>>>>
>>>>
>>>> On 11/20/23 15:48, Sagi Grimberg wrote:
>>>>>
>>>>>>> According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
>>>>>>> controller teardown") this is just for reducing the memory 
>>>>>>> footprint.
>>>>>>> Wonder if we need to bother, and whether it won't be better to 
>>>>>>> remove
>>>>>>> the whole thing entirely.
>>>>>>
>>>>>> Well, Sagi added it, so I'll let him chime in on the usefulness.
>>>>>
>>>>> While I don't like having nvmet arbitrarily replying busy and instead
>>>>> have lockdep simply just accept that its not a deadlock here, but 
>>>>> we can
>>>>> simply just sidetrack it as proposed I guess.
>>>>>
>>>>> But Hannes, this is on the other extreme.. Now every connect that 
>>>>> nvmet
>>>>> gets, if there is even a single queue that is disconnecting (global
>>>>> scope), then the host is denied. Lets give it a sane backlog.
>>>>> We use rdma_listen backlog of 128, so maybe stick with this magic
>>>>> number... This way we are busy only if more than 128 queues are 
>>>>> tearing
>>>>> down to prevent the memory footprint from exploding, and hopefully 
>>>>> it is
>>>>> rare enough that the normal host does not see an arbitrary busy
>>>>> rejection.
>>>>>
>>>>> Same comment for nvmet-tcp.
>>>>
>>>> Hey Hannes, anything happened with this one?
>>>>
>>>> Overall I think that the approach is fine, but I do think
>>>> that we need a backlog for it.
>>>
>>> Hmm. The main issue here is the call to 'flush_workqueue()', which 
>>> triggers the circular lock warning. So a ratelimit would only help
>>> us so much; the real issue is to get rid of the flush_workqueue()
>>> thingie.
>>>
>>> What I can to is to add this:
>>>
>>> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
>>> index 4cc27856aa8f..72bcc54701a0 100644
>>> --- a/drivers/nvme/target/tcp.c
>>> +++ b/drivers/nvme/target/tcp.c
>>> @@ -2119,8 +2119,20 @@ static u16 nvmet_tcp_install_queue(struct 
>>> nvmet_sq *sq)
>>>                  container_of(sq, struct nvmet_tcp_queue, nvme_sq);
>>>
>>>          if (sq->qid == 0) {
>>> +               struct nvmet_tcp_queue *q;
>>> +               int pending = 0;
>>> +
>>>                  /* Let inflight controller teardown complete */
>>> -               flush_workqueue(nvmet_wq);
>>> +               mutex_lock(&nvmet_tcp_queue_mutex);
>>> +               list_for_each_entry(q, &nvmet_tcp_queue_list, 
>>> queue_list) {
>>> +                       if (q->nvme_sq.ctrl == sq->ctrl &&
>>> +                           q->state == NVMET_TCP_Q_DISCONNECTING)
>>> +                               pending++;
>>> +               }
>>> +               mutex_unlock(&nvmet_tcp_queue_mutex);
>>> +               /* Retry for pending controller teardown */
>>> +               if (pending)
>>> +                       return NVME_SC_CONNECT_CTRL_BUSY;
>>>          }
>>>
>>> which then would only affect the controller we're connecting to.
>>> Hmm?
>>
>> Still I think we should give a reasonable backlog, no reason to limit
>> this as we may hit this more often than we'd like and the sole purpose
>> here is to avoid memory overrun.
> 
> So would 'if (pending > tcp_backlog)' (with eg tcp_backlog = 20) fit the 
> bill here?

Yes, I think so. We already place a listen backlog, so maybe we can reuse
that as NVME_TCP_BACKLOG or something. The same should apply to nvmet-rdma.

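Sketching the direction agreed on here (the constant name and the value
128 are taken from this discussion, not from any merged code; types and
helper names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define TCP_BACKLOG 128	/* assumed: mirrors the rdma_listen() backlog */

enum q_state { Q_LIVE, Q_DISCONNECTING };

struct queue {
	enum q_state state;
	struct queue *next;
};

/*
 * Only report "controller busy" once more than `backlog` queues are
 * tearing down at the same time: a handful of pending teardowns is
 * harmless, since the check exists purely to bound memory usage.
 */
bool over_backlog(const struct queue *head, int backlog)
{
	int pending = 0;

	for (const struct queue *q = head; q; q = q->next)
		if (q->state == Q_DISCONNECTING)
			pending++;
	return pending > backlog;
}
```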


* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-12-04 12:46                       ` Sagi Grimberg
@ 2023-12-07  5:54                         ` Shinichiro Kawasaki
  2023-12-07 12:17                           ` Sagi Grimberg
  0 siblings, 1 reply; 19+ messages in thread
From: Shinichiro Kawasaki @ 2023-12-07  5:54 UTC (permalink / raw)
  To: Sagi Grimberg; +Cc: Hannes Reinecke, Christoph Hellwig, Keith Busch, linux-nvme

On Dec 04, 2023 / 14:46, Sagi Grimberg wrote:
> 
> 
> On 12/4/23 14:31, Hannes Reinecke wrote:
> > On 12/4/23 12:57, Sagi Grimberg wrote:
> > > 
> > > 
> > > On 12/4/23 13:49, Hannes Reinecke wrote:
> > > > On 12/4/23 11:19, Sagi Grimberg wrote:
> > > > > 
> > > > > 
> > > > > On 11/20/23 15:48, Sagi Grimberg wrote:
> > > > > > 
> > > > > > > > According to 777dc82395de ("nvmet-rdma: occasionally flush ongoing
> > > > > > > > controller teardown") this is just for reducing
> > > > > > > > the memory footprint.
> > > > > > > > Wonder if we need to bother, and whether it
> > > > > > > > won't be better to remove
> > > > > > > > the whole thing entirely.
> > > > > > > 
> > > > > > > Well, Sagi added it, so I'll let him chime in on the usefulness.
> > > > > > 
> > > > > > While I don't like having nvmet arbitrarily replying busy and instead
> > > > > > have lockdep simply just accept that its not a deadlock
> > > > > > here, but we can
> > > > > > simply just sidetrack it as proposed I guess.
> > > > > > 
> > > > > > But Hannes, this is on the other extreme.. Now every
> > > > > > connect that nvmet
> > > > > > gets, if there is even a single queue that is disconnecting (global
> > > > > > scope), then the host is denied. Lets give it a sane backlog.
> > > > > > We use rdma_listen backlog of 128, so maybe stick with this magic
> > > > > > number... This way we are busy only if more than 128
> > > > > > queues are tearing
> > > > > > down to prevent the memory footprint from exploding, and
> > > > > > hopefully it is
> > > > > > rare enough that the normal host does not see an arbitrary busy
> > > > > > rejection.
> > > > > > 
> > > > > > Same comment for nvmet-tcp.
> > > > > 
> > > > > Hey Hannes, anything happened with this one?
> > > > > 
> > > > > Overall I think that the approach is fine, but I do think
> > > > > that we need a backlog for it.
> > > > 
> > > > Hmm. The main issue here is the call to 'flush_workqueue()',
> > > > which triggers the circular lock warning. So a ratelimit would
> > > > only help us so much; the real issue is to get rid of the
> > > > flush_workqueue() thingie.
> > > > 
> > > > What I can do is add this:
> > > > 
> > > > diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> > > > index 4cc27856aa8f..72bcc54701a0 100644
> > > > --- a/drivers/nvme/target/tcp.c
> > > > +++ b/drivers/nvme/target/tcp.c
> > > > @@ -2119,8 +2119,20 @@ static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq)
> > > >                  container_of(sq, struct nvmet_tcp_queue, nvme_sq);
> > > > 
> > > >          if (sq->qid == 0) {
> > > > +               struct nvmet_tcp_queue *q;
> > > > +               int pending = 0;
> > > > +
> > > >                  /* Let inflight controller teardown complete */
> > > > -               flush_workqueue(nvmet_wq);
> > > > +               mutex_lock(&nvmet_tcp_queue_mutex);
> > > > +               list_for_each_entry(q, &nvmet_tcp_queue_list, queue_list) {
> > > > +                       if (q->nvme_sq.ctrl == sq->ctrl &&
> > > > +                           q->state == NVMET_TCP_Q_DISCONNECTING)
> > > > +                               pending++;
> > > > +               }
> > > > +               mutex_unlock(&nvmet_tcp_queue_mutex);
> > > > +               /* Retry for pending controller teardown */
> > > > +               if (pending)
> > > > +                       return NVME_SC_CONNECT_CTRL_BUSY;
> > > >          }
> > > > 
> > > > which then would only affect the controller we're connecting to.
> > > > Hmm?
> > > 
> > > Still I think we should give a reasonable backlog, no reason to limit
> > > this as we may hit this more often than we'd like and the sole purpose
> > > here is to avoid memory overrun.
> > 
> > So would 'if (pending > tcp_backlog)' (with e.g. tcp_backlog = 20) fit
> > the bill here?
> 
> Yes I think so, we already place a listen backlog, so maybe we can reuse
> that as NVME_TCP_BACKLOG or something. The same should apply to nvme-rdma.

Thanks for the discussion. FYI, I modified the patches by Hannes to reflect the
discussion above [1][2]. With these, I confirmed the lockdep WARN disappears,
and saw no regression in my blktests runs. Looks good. Will test again when
formal patches come out.

[1] https://github.com/kawasaki/linux/commit/dc58819d07b02bd54dbcd5b15d2600a517fcbbff
[2] https://github.com/kawasaki/linux/commit/20fb65d145152562d0e11a9c8065e64405c31b0d

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-12-07  5:54                         ` Shinichiro Kawasaki
@ 2023-12-07 12:17                           ` Sagi Grimberg
  0 siblings, 0 replies; 19+ messages in thread
From: Sagi Grimberg @ 2023-12-07 12:17 UTC (permalink / raw)
  To: Shinichiro Kawasaki
  Cc: Hannes Reinecke, Christoph Hellwig, Keith Busch, linux-nvme


>> Yes I think so, we already place a listen backlog, so maybe we can reuse
>> that as NVME_TCP_BACKLOG or something. The same should apply to nvme-rdma.
> 
> Thanks for the discussion. FYI, I modified the patches by Hannes to reflect the
> discussion above [1][2]. With these, I confirmed the lockdep WARN disappears,
> and saw no regression in my blktests runs. Looks good. Will test again when
> formal patches come out.
> 
> [1] https://github.com/kawasaki/linux/commit/dc58819d07b02bd54dbcd5b15d2600a517fcbbff
> [2] https://github.com/kawasaki/linux/commit/20fb65d145152562d0e11a9c8065e64405c31b0d

Can you/Hannes please submit them?



* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-01 16:21   ` Jens Axboe
@ 2023-11-01 17:28     ` Hannes Reinecke
  0 siblings, 0 replies; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-01 17:28 UTC (permalink / raw)
  To: Jens Axboe, Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme

On 11/1/23 17:21, Jens Axboe wrote:
> On 11/1/23 4:32 AM, Hannes Reinecke wrote:
>> nvmet_rdma_install_queue() is driven from the ->io_work workqueue
>> function, but will call flush_workqueue() which might trigger
>> ->release_work() which in itself calls flush_work on ->io_work.
>>
>> To avoid that, check for pending queues in disconnecting state,
>> and return 'controller busy' until all disconnects are completed.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>> ---
>>   drivers/nvme/target/rdma.c | 14 ++++++++++++--
>>   1 file changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 4597bca43a6d..eaeb94a9e863 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -1583,8 +1583,18 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
>>   	}
>>   
>>   	if (queue->host_qid == 0) {
>> -		/* Let inflight controller teardown complete */
>> -		flush_workqueue(nvmet_wq);
>> +		struct nvmet_rdma_queue *q;
>> +		int pending = 0;
>> +
>> +		mutex_lock(&nvmet_rdma_queue_mutex);
>> +		list_for_each_entry(q, &nvmet_rdma_queue_list, queue_list) {
>> +			if (q->state == NVMET_RDMA_Q_DISCONNECTING)
>> +				pending++;
>> +		}
>> +		mutex_unlock(&nvmet_rdma_queue_mutex);
>> +		/* Retry for pending controller teardown */
>> +		if (pending)
>> +			return NVME_SC_CONNECT_CTRL_BUSY;
> 
> Not sure if it's worth turning this into a helper since both patches do
> the same thing. Probably not, since you'd need to pass in the mutex and
> state too. In any case, why not just break if you hit
> NVMET_RDMA_Q_DISCONNECTING rather than keep looping? You don't care
> about the exact count, just whether it's non-zero or not.
> 
True, that's easier. Will be updating the patches.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman




* Re: [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-01 10:32 ` [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue() Hannes Reinecke
@ 2023-11-01 16:21   ` Jens Axboe
  2023-11-01 17:28     ` Hannes Reinecke
  0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2023-11-01 16:21 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme

On 11/1/23 4:32 AM, Hannes Reinecke wrote:
> nvmet_rdma_install_queue() is driven from the ->io_work workqueue
> function, but will call flush_workqueue() which might trigger
> ->release_work() which in itself calls flush_work on ->io_work.
> 
> To avoid that, check for pending queues in disconnecting state,
> and return 'controller busy' until all disconnects are completed.
> 
> Signed-off-by: Hannes Reinecke <hare@suse.de>
> ---
>  drivers/nvme/target/rdma.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 4597bca43a6d..eaeb94a9e863 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -1583,8 +1583,18 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
>  	}
>  
>  	if (queue->host_qid == 0) {
> -		/* Let inflight controller teardown complete */
> -		flush_workqueue(nvmet_wq);
> +		struct nvmet_rdma_queue *q;
> +		int pending = 0;
> +
> +		mutex_lock(&nvmet_rdma_queue_mutex);
> +		list_for_each_entry(q, &nvmet_rdma_queue_list, queue_list) {
> +			if (q->state == NVMET_RDMA_Q_DISCONNECTING)
> +				pending++;
> +		}
> +		mutex_unlock(&nvmet_rdma_queue_mutex);
> +		/* Retry for pending controller teardown */
> +		if (pending)
> +			return NVME_SC_CONNECT_CTRL_BUSY;

Not sure if it's worth turning this into a helper since both patches do
the same thing. Probably not, since you'd need to pass in the mutex and
state too. In any case, why not just break if you hit
NVMET_RDMA_Q_DISCONNECTING rather than keep looping? You don't care
about the exact count, just whether it's non-zero or not.

-- 
Jens Axboe




* [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue()
  2023-11-01 10:32 [PATCH 0/2] nvmet: avoid circular locking warning Hannes Reinecke
@ 2023-11-01 10:32 ` Hannes Reinecke
  2023-11-01 16:21   ` Jens Axboe
  0 siblings, 1 reply; 19+ messages in thread
From: Hannes Reinecke @ 2023-11-01 10:32 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke

nvmet_rdma_install_queue() is driven from the ->io_work workqueue
function, but will call flush_workqueue() which might trigger
->release_work() which in itself calls flush_work on ->io_work.

To avoid that, check for pending queues in disconnecting state,
and return 'controller busy' until all disconnects are completed.

Signed-off-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/target/rdma.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 4597bca43a6d..eaeb94a9e863 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1583,8 +1583,18 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,
 	}
 
 	if (queue->host_qid == 0) {
-		/* Let inflight controller teardown complete */
-		flush_workqueue(nvmet_wq);
+		struct nvmet_rdma_queue *q;
+		int pending = 0;
+
+		mutex_lock(&nvmet_rdma_queue_mutex);
+		list_for_each_entry(q, &nvmet_rdma_queue_list, queue_list) {
+			if (q->state == NVMET_RDMA_Q_DISCONNECTING)
+				pending++;
+		}
+		mutex_unlock(&nvmet_rdma_queue_mutex);
+		/* Retry for pending controller teardown */
+		if (pending)
+			return NVME_SC_CONNECT_CTRL_BUSY;
 	}
 
 	ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
-- 
2.35.3




end of thread, other threads:[~2023-12-07 12:17 UTC | newest]

Thread overview: 19+ messages
-- links below jump to the message on this page --
2023-11-02 14:19 [PATCHv2 0/2] nvmet: avoid circular locking warning Hannes Reinecke
2023-11-02 14:19 ` [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue() Hannes Reinecke
2023-11-03  8:23   ` Christoph Hellwig
2023-11-03  8:53     ` Hannes Reinecke
2023-11-03  9:19       ` Christoph Hellwig
2023-11-03 11:58         ` Hannes Reinecke
2023-11-03 14:05           ` Christoph Hellwig
2023-11-20 13:48             ` Sagi Grimberg
2023-12-04 10:19               ` Sagi Grimberg
2023-12-04 11:49                 ` Hannes Reinecke
2023-12-04 11:57                   ` Sagi Grimberg
2023-12-04 12:31                     ` Hannes Reinecke
2023-12-04 12:46                       ` Sagi Grimberg
2023-12-07  5:54                         ` Shinichiro Kawasaki
2023-12-07 12:17                           ` Sagi Grimberg
2023-11-02 14:19 ` [PATCH 2/2] nvmet-tcp: " Hannes Reinecke
  -- strict thread matches above, loose matches on Subject: below --
2023-11-01 10:32 [PATCH 0/2] nvmet: avoid circular locking warning Hannes Reinecke
2023-11-01 10:32 ` [PATCH 1/2] nvmet-rdma: avoid circular locking dependency on install_queue() Hannes Reinecke
2023-11-01 16:21   ` Jens Axboe
2023-11-01 17:28     ` Hannes Reinecke
