* [PATCH] nvmet-rdma: Fix double free of rdma queue
@ 2020-03-29 10:21 Israel Rukshin
  2020-03-30  4:36 ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Israel Rukshin @ 2020-03-29 10:21 UTC (permalink / raw)
  To: Linux-nvme, Sagi Grimberg, Christoph Hellwig
  Cc: Shlomi Nimrodi, Israel Rukshin, Max Gurtovoy

In case rdma accept fails at nvmet_rdma_queue_connect(), release work is
scheduled. Later on, a new RDMA CM event may arrive, since we didn't
destroy the cm_id, and call nvmet_rdma_queue_connect_fail(), which schedules
another release work. This causes nvmet_rdma_free_queue() to be called twice.
To fix this, don't schedule the work from nvmet_rdma_queue_connect_fail()
when queue_list is empty (the queue is inserted into the list only after a
successful rdma accept).

Signed-off-by: Israel Rukshin <israelr@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/rdma.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 37d262a..59209e3 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1380,13 +1380,14 @@ static void nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
 {
 	WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
 
+	pr_err("failed to connect queue %d\n", queue->idx);
+
 	mutex_lock(&nvmet_rdma_queue_mutex);
-	if (!list_empty(&queue->queue_list))
+	if (!list_empty(&queue->queue_list)) {
 		list_del_init(&queue->queue_list);
+		schedule_work(&queue->release_work);
+	}
 	mutex_unlock(&nvmet_rdma_queue_mutex);
-
-	pr_err("failed to connect queue %d\n", queue->idx);
-	schedule_work(&queue->release_work);
 }
 
 /**
-- 
1.8.3.1
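
To make the race concrete: on an accept failure the connect path schedules
release_work itself, and before this patch a later CM error event would
schedule it again unconditionally. The guard above uses membership in the
queue list as the token that decides who may schedule the release. Below is
a minimal userspace analogue of that pattern (a sketch only, not driver
code; fake_queue and try_schedule_release are made-up names):

--
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_queue {
	pthread_mutex_t lock;		/* stands in for nvmet_rdma_queue_mutex */
	bool on_list;			/* stands in for !list_empty(&queue->queue_list) */
	int releases_scheduled;
};

/* Both teardown paths call this; only the one that finds the token wins. */
static void try_schedule_release(struct fake_queue *q, const char *who)
{
	pthread_mutex_lock(&q->lock);
	if (q->on_list) {
		q->on_list = false;		/* list_del_init() */
		q->releases_scheduled++;	/* schedule_work(&queue->release_work) */
		printf("%s: release scheduled\n", who);
	} else {
		printf("%s: queue already off the list, nothing to do\n", who);
	}
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	struct fake_queue q = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.on_list = true,
	};

	try_schedule_release(&q, "first teardown path");
	try_schedule_release(&q, "racing cm event");

	printf("releases scheduled: %d (must be exactly 1)\n",
	       q.releases_scheduled);
	return 0;
}
--

With the fix, nvmet_rdma_queue_connect_fail() behaves the same way: if the
queue never made it onto the list (because rdma accept failed), it finds the
list entry empty and skips scheduling a second release.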



* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-03-29 10:21 [PATCH] nvmet-rdma: Fix double free of rdma queue Israel Rukshin
@ 2020-03-30  4:36 ` Sagi Grimberg
  2020-03-30  8:22   ` Max Gurtovoy
  0 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2020-03-30  4:36 UTC (permalink / raw)
  To: Israel Rukshin, Linux-nvme, Christoph Hellwig
  Cc: Shlomi Nimrodi, Max Gurtovoy


> In case rdma accept fails at nvmet_rdma_queue_connect() release work is
> scheduled. Later on, a new RDMA CM event may arrive since we didn't
> destroy the cm-id and call nvmet_rdma_queue_connect_fail(), which schedule
> another release work. This will cause calling nvmet_rdma_free_queue twice.
> To fix this don't schedule the work from nvmet_rdma_queue_connect_fail()
> when queue_list is empty (the queue is inserted to a list only after
> successful rdma accept).
> 
> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
> Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
> ---
>   drivers/nvme/target/rdma.c | 9 +++++----
>   1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 37d262a..59209e3 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -1380,13 +1380,14 @@ static void nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
>   {
>   	WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
>   
> +	pr_err("failed to connect queue %d\n", queue->idx);
> +
>   	mutex_lock(&nvmet_rdma_queue_mutex);
> -	if (!list_empty(&queue->queue_list))
> +	if (!list_empty(&queue->queue_list)) {
>   		list_del_init(&queue->queue_list);
> +		schedule_work(&queue->release_work);

This has a hidden assumption that the connect handler already
scheduled the release.

Why don't we simply not queue the release_work on accept
failure and instead return a negative status code to implicitly remove
the cm_id? This way we will never see any further cm events and we
don't need to handle them.
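
For context on the rdma_cm convention being relied on here: when a cm_id's
event handler returns non-zero, the rdma_cm core destroys that cm_id on the
ULP's behalf, so the handler must not call rdma_destroy_id() itself and will
not receive further events for that id. A rough kernel-style sketch of the
convention (an illustration only, not nvmet-rdma code; setup_new_queue() is
a hypothetical helper, and the concrete proposal appears later in the
thread):

--
/* sketch of the rdma_cm convention, not nvmet-rdma code */
static int example_cm_handler(struct rdma_cm_id *cm_id,
			      struct rdma_cm_event *event)
{
	int ret;

	switch (event->event) {
	case RDMA_CM_EVENT_CONNECT_REQUEST:
		ret = setup_new_queue(cm_id, event);	/* hypothetical */
		if (ret)
			/*
			 * Non-zero return: the rdma_cm core destroys
			 * cm_id for us and no further events arrive,
			 * so there is nothing left to clean up here.
			 */
			return ret;
		return 0;
	default:
		return 0;
	}
}
--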


* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-03-30  4:36 ` Sagi Grimberg
@ 2020-03-30  8:22   ` Max Gurtovoy
  2020-03-30  8:56     ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Max Gurtovoy @ 2020-03-30  8:22 UTC (permalink / raw)
  To: Sagi Grimberg, Israel Rukshin, Linux-nvme, Christoph Hellwig
  Cc: Shlomi Nimrodi


On 3/30/2020 7:36 AM, Sagi Grimberg wrote:
>
>> In case rdma accept fails at nvmet_rdma_queue_connect() release work is
>> scheduled. Later on, a new RDMA CM event may arrive since we didn't
>> destroy the cm-id and call nvmet_rdma_queue_connect_fail(), which 
>> schedule
>> another release work. This will cause calling nvmet_rdma_free_queue 
>> twice.
>> To fix this don't schedule the work from nvmet_rdma_queue_connect_fail()
>> when queue_list is empty (the queue is inserted to a list only after
>> successful rdma accept).
>>
>> Signed-off-by: Israel Rukshin <israelr@mellanox.com>
>> Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
>> ---
>>   drivers/nvme/target/rdma.c | 9 +++++----
>>   1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index 37d262a..59209e3 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -1380,13 +1380,14 @@ static void 
>> nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
>>   {
>>       WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
>>   +    pr_err("failed to connect queue %d\n", queue->idx);
>> +
>>       mutex_lock(&nvmet_rdma_queue_mutex);
>> -    if (!list_empty(&queue->queue_list))
>> +    if (!list_empty(&queue->queue_list)) {
>>           list_del_init(&queue->queue_list);
>> +        schedule_work(&queue->release_work);
>
> This has a hidden assumption that the connect handler already
> scheduled the release.
>
> Why don't we simply not queue the release_work in the accept
> failure and return a negative status code to implicitly remove the
> cm_id? this way we will never see any cm events and we don't
> need to handle it.

This changes the flow but I guess we can check this out.

But still, this flow can be called from 3 different events 
(RDMA_CM_EVENT_REJECTED, RDMA_CM_EVENT_UNREACHABLE, 
RDMA_CM_EVENT_CONNECT_ERROR) so I prefer to locate the schedule_work 
under the "if".




* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-03-30  8:22   ` Max Gurtovoy
@ 2020-03-30  8:56     ` Sagi Grimberg
  2020-03-30  9:37       ` Israel Rukshin
  0 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2020-03-30  8:56 UTC (permalink / raw)
  To: Max Gurtovoy, Israel Rukshin, Linux-nvme, Christoph Hellwig
  Cc: Shlomi Nimrodi


>>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>>> index 37d262a..59209e3 100644
>>> --- a/drivers/nvme/target/rdma.c
>>> +++ b/drivers/nvme/target/rdma.c
>>> @@ -1380,13 +1380,14 @@ static void 
>>> nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
>>>   {
>>>       WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
>>>   +    pr_err("failed to connect queue %d\n", queue->idx);
>>> +
>>>       mutex_lock(&nvmet_rdma_queue_mutex);
>>> -    if (!list_empty(&queue->queue_list))
>>> +    if (!list_empty(&queue->queue_list)) {
>>>           list_del_init(&queue->queue_list);
>>> +        schedule_work(&queue->release_work);
>>
>> This has a hidden assumption that the connect handler already
>> scheduled the release.
>>
>> Why don't we simply not queue the release_work in the accept
>> failure and return a negative status code to implicitly remove the
>> cm_id? this way we will never see any cm events and we don't
>> need to handle it.
> 
> This changes the flow but I guess we can check this out.
> 
> But still, this flow can be called from 3 different events 
> (RDMA_CM_EVENT_REJECTED, RDMA_CM_EVENT_UNREACHABLE, 
> RDMA_CM_EVENT_CONNECT_ERROR) so I prefer to locate the schedule_work 
> under the "if".

This "if" is only checked on a connect error. Anyway, if you feel that
this flow is racy, perhaps implement proper serialization instead of
checking a random "if" that leaves the reader wondering why the two are
even related.


* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-03-30  8:56     ` Sagi Grimberg
@ 2020-03-30  9:37       ` Israel Rukshin
  2020-03-31  6:42         ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Israel Rukshin @ 2020-03-30  9:37 UTC (permalink / raw)
  To: Sagi Grimberg, Max Gurtovoy, Linux-nvme, Christoph Hellwig; +Cc: Shlomi Nimrodi

On 3/30/2020 11:56 AM, Sagi Grimberg wrote:
>
>>>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>>>> index 37d262a..59209e3 100644
>>>> --- a/drivers/nvme/target/rdma.c
>>>> +++ b/drivers/nvme/target/rdma.c
>>>> @@ -1380,13 +1380,14 @@ static void 
>>>> nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
>>>>   {
>>>>       WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
>>>>   +    pr_err("failed to connect queue %d\n", queue->idx);
>>>> +
>>>>       mutex_lock(&nvmet_rdma_queue_mutex);
>>>> -    if (!list_empty(&queue->queue_list))
>>>> +    if (!list_empty(&queue->queue_list)) {
>>>>           list_del_init(&queue->queue_list);
>>>> +        schedule_work(&queue->release_work);
>>>
>>> This has a hidden assumption that the connect handler already
>>> scheduled the release.
>>>
>>> Why don't we simply not queue the release_work in the accept
>>> failure and return a negative status code to implicitly remove the
>>> cm_id? this way we will never see any cm events and we don't
>>> need to handle it.
>>
>> This changes the flow but I guess we can check this out.
>>
>> But still, this flow can be called from 3 different events 
>> (RDMA_CM_EVENT_REJECTED, RDMA_CM_EVENT_UNREACHABLE, 
>> RDMA_CM_EVENT_CONNECT_ERROR) so I prefer to locate the schedule_work 
>> under the "if".
>
> This if is only checked in connect error. Anyway, if you feel that
> this flow is racy, perhaps implement a proper serialization, instead
> of checking a random "if" that makes the reader think why are they
> even related.

This "if" is exactly like we are doing at nvmet_rdma_queue_disconnect().

All the other places before calling __nvmet_rdma_queue_disconnect() 
delete the queue from the list.

So I guess my change also protect us from races with 
nvmet_rdma_delete_ctrl/nvmet_rdma_remove_one.

Beside that, why do we need to check if the list is not empty before 
removing it from the list at nvmet_rdma_queue_connect_fail()?

I don't see a reason why to remove only the queue from the list without 
schedule the release work.
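
For readers following the symmetry argument: the guard in
nvmet_rdma_queue_disconnect() looks roughly like the following (paraphrased
from memory, not a verbatim copy of the driver), which is why the new "if"
in nvmet_rdma_queue_connect_fail() mirrors existing practice:

--
static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue)
{
	bool disconnect = false;

	mutex_lock(&nvmet_rdma_queue_mutex);
	if (!list_empty(&queue->queue_list)) {
		list_del_init(&queue->queue_list);
		disconnect = true;
	}
	mutex_unlock(&nvmet_rdma_queue_mutex);

	if (disconnect)
		__nvmet_rdma_queue_disconnect(queue);
}
--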



* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-03-30  9:37       ` Israel Rukshin
@ 2020-03-31  6:42         ` Sagi Grimberg
  2020-04-05 14:43           ` Israel Rukshin
  0 siblings, 1 reply; 8+ messages in thread
From: Sagi Grimberg @ 2020-03-31  6:42 UTC (permalink / raw)
  To: Israel Rukshin, Max Gurtovoy, Linux-nvme, Christoph Hellwig
  Cc: Shlomi Nimrodi



On 3/30/20 2:37 AM, Israel Rukshin wrote:
> On 3/30/2020 11:56 AM, Sagi Grimberg wrote:
>>
>>>>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>>>>> index 37d262a..59209e3 100644
>>>>> --- a/drivers/nvme/target/rdma.c
>>>>> +++ b/drivers/nvme/target/rdma.c
>>>>> @@ -1380,13 +1380,14 @@ static void 
>>>>> nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
>>>>>   {
>>>>>       WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
>>>>>   +    pr_err("failed to connect queue %d\n", queue->idx);
>>>>> +
>>>>>       mutex_lock(&nvmet_rdma_queue_mutex);
>>>>> -    if (!list_empty(&queue->queue_list))
>>>>> +    if (!list_empty(&queue->queue_list)) {
>>>>>           list_del_init(&queue->queue_list);
>>>>> +        schedule_work(&queue->release_work);
>>>>
>>>> This has a hidden assumption that the connect handler already
>>>> scheduled the release.
>>>>
>>>> Why don't we simply not queue the release_work in the accept
>>>> failure and return a negative status code to implicitly remove the
>>>> cm_id? this way we will never see any cm events and we don't
>>>> need to handle it.
>>>
>>> This changes the flow but I guess we can check this out.
>>>
>>> But still, this flow can be called from 3 different events 
>>> (RDMA_CM_EVENT_REJECTED, RDMA_CM_EVENT_UNREACHABLE, 
>>> RDMA_CM_EVENT_CONNECT_ERROR) so I prefer to locate the schedule_work 
>>> under the "if".
>>
>> This if is only checked in connect error. Anyway, if you feel that
>> this flow is racy, perhaps implement a proper serialization, instead
>> of checking a random "if" that makes the reader think why are they
>> even related.
> 
> This "if" is exactly like we are doing at nvmet_rdma_queue_disconnect().

You're right.

> All the other places before calling __nvmet_rdma_queue_disconnect() 
> delete the queue from the list.
> 
> So I guess my change also protect us from races with 
> nvmet_rdma_delete_ctrl/nvmet_rdma_remove_one.
> 
> Beside that, why do we need to check if the list is not empty before 
> removing it from the list at nvmet_rdma_queue_connect_fail()?
> 
> I don't see a reason why to remove only the queue from the list without 
> schedule the release work.

That is fine with me, assuming we have a proper comment.

But if we take a step back, nvmet_rdma_create_queue_ib does not create
the cm_id, so why should destroy_queue_ib destroy it?

What if we moved destroying the cm_id into release_work (out of
nvmet_rdma_free_queue) and had the accept error path return a normal
negative ret to implicitly destroy the cm_id?

In a sense, that would make the behavior symmetric. Thoughts?


* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-03-31  6:42         ` Sagi Grimberg
@ 2020-04-05 14:43           ` Israel Rukshin
  2020-04-06  7:14             ` Sagi Grimberg
  0 siblings, 1 reply; 8+ messages in thread
From: Israel Rukshin @ 2020-04-05 14:43 UTC (permalink / raw)
  To: Sagi Grimberg, Max Gurtovoy, Linux-nvme, Christoph Hellwig; +Cc: Shlomi Nimrodi

On 3/31/2020 9:42 AM, Sagi Grimberg wrote:
>
>
> On 3/30/20 2:37 AM, Israel Rukshin wrote:
>> On 3/30/2020 11:56 AM, Sagi Grimberg wrote:
>>>
>>>>>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>>>>>> index 37d262a..59209e3 100644
>>>>>> --- a/drivers/nvme/target/rdma.c
>>>>>> +++ b/drivers/nvme/target/rdma.c
>>>>>> @@ -1380,13 +1380,14 @@ static void 
>>>>>> nvmet_rdma_queue_connect_fail(struct rdma_cm_id *cm_id,
>>>>>>   {
>>>>>>       WARN_ON_ONCE(queue->state != NVMET_RDMA_Q_CONNECTING);
>>>>>>   +    pr_err("failed to connect queue %d\n", queue->idx);
>>>>>> +
>>>>>>       mutex_lock(&nvmet_rdma_queue_mutex);
>>>>>> -    if (!list_empty(&queue->queue_list))
>>>>>> +    if (!list_empty(&queue->queue_list)) {
>>>>>>           list_del_init(&queue->queue_list);
>>>>>> +        schedule_work(&queue->release_work);
>>>>>
>>>>> This has a hidden assumption that the connect handler already
>>>>> scheduled the release.
>>>>>
>>>>> Why don't we simply not queue the release_work in the accept
>>>>> failure and return a negative status code to implicitly remove the
>>>>> cm_id? this way we will never see any cm events and we don't
>>>>> need to handle it.
>>>>
>>>> This changes the flow but I guess we can check this out.
>>>>
>>>> But still, this flow can be called from 3 different events 
>>>> (RDMA_CM_EVENT_REJECTED, RDMA_CM_EVENT_UNREACHABLE, 
>>>> RDMA_CM_EVENT_CONNECT_ERROR) so I prefer to locate the 
>>>> schedule_work under the "if".
>>>
>>> This if is only checked in connect error. Anyway, if you feel that
>>> this flow is racy, perhaps implement a proper serialization, instead
>>> of checking a random "if" that makes the reader think why are they
>>> even related.
>>
>> This "if" is exactly like we are doing at nvmet_rdma_queue_disconnect().
>
> You're right.
>
>> All the other places before calling __nvmet_rdma_queue_disconnect() 
>> delete the queue from the list.
>>
>> So I guess my change also protect us from races with 
>> nvmet_rdma_delete_ctrl/nvmet_rdma_remove_one.
>>
>> Beside that, why do we need to check if the list is not empty before 
>> removing it from the list at nvmet_rdma_queue_connect_fail()?
>>
>> I don't see a reason why to remove only the queue from the list 
>> without schedule the release work.
>
> That is fine with me, assuming we have a proper comment.
>
> But if we take a step back, nvmet_rdma_create_queue_ib does not create
> the cm_id, so why should destroy_queue_ib destroy it?
>
This is because we can't destroy the QP before destroying the cm_id.

You can look at the following commit from ib_isert: "19e2090 
iser-target: Fix connected_handler + teardown flow race"

In order to avoid freeing nvmet rdma queues while handling rdma_cm
events, we destroy the qp and the queue after destroying the cm_id,
which guarantees that all rdma_cm events are done.

> What if we made destroying the cm_id in release_work (out of 
> nvmet_rdma_free_queue) and have the accept error path return a normal
> negative ret to implicitly destroy the cm_id?
We can't destroy the cm_id in release_work after calling
nvmet_rdma_free_queue (and I am not sure that we can destroy it before
calling nvmet_rdma_free_queue).
>
> In a sense, that would make the behavior symmetric. Thoughts?
So, we can't make it symmetric.


* Re: [PATCH] nvmet-rdma: Fix double free of rdma queue
  2020-04-05 14:43           ` Israel Rukshin
@ 2020-04-06  7:14             ` Sagi Grimberg
  0 siblings, 0 replies; 8+ messages in thread
From: Sagi Grimberg @ 2020-04-06  7:14 UTC (permalink / raw)
  To: Israel Rukshin, Max Gurtovoy, Linux-nvme, Christoph Hellwig
  Cc: Shlomi Nimrodi


>> But if we take a step back, nvmet_rdma_create_queue_ib does not create
>> the cm_id, so why should destroy_queue_ib destroy it?
>>
> This is because we can't destroy the QP before destroying the cm_id.

I meant something like:
--
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index fd71cfe5c5d6..89fd37b1140e 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1069,7 +1069,8 @@ static void nvmet_rdma_destroy_queue_ib(struct nvmet_rdma_queue *queue)
         struct ib_qp *qp = queue->cm_id->qp;

         ib_drain_qp(qp);
-       rdma_destroy_id(queue->cm_id);
+       if (queue->cm_id)
+               rdma_destroy_id(queue->cm_id);
         ib_destroy_qp(qp);
         ib_free_cq(queue->cq);
  }
@@ -1079,7 +1080,6 @@ static void nvmet_rdma_free_queue(struct nvmet_rdma_queue *queue)
         pr_debug("freeing queue %d\n", queue->idx);

         nvmet_sq_destroy(&queue->nvme_sq);
-
         nvmet_rdma_destroy_queue_ib(queue);
         if (!queue->dev->srq) {
                 nvmet_rdma_free_cmds(queue->dev, queue->cmds,
@@ -1305,9 +1305,12 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,

         ret = nvmet_rdma_cm_accept(cm_id, queue, &event->param.conn);
         if (ret) {
-               schedule_work(&queue->release_work);
-               /* Destroying rdma_cm id is not needed here */
-               return 0;
+               /*
+                * don't destroy the cm_id in free path, as we implicitly
+                * destroy the cm_id here with non-zero ret code.
+                */
+               queue->cm_id = NULL;
+               goto free_queue;
         }

         mutex_lock(&nvmet_rdma_queue_mutex);
@@ -1316,9 +1319,10 @@ static int nvmet_rdma_queue_connect(struct rdma_cm_id *cm_id,

         return 0;

+free_queue:
+       nvmet_rdma_free_queue(queue);
  put_device:
         kref_put(&ndev->ref, nvmet_rdma_free_dev);
-
         return ret;
  }
--

