* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
@ 2018-06-14 12:37 Israel Rukshin
  2018-06-15  8:01 ` Christoph Hellwig
  0 siblings, 1 reply; 12+ messages in thread
From: Israel Rukshin @ 2018-06-14 12:37 UTC (permalink / raw)


On error, calling nvme_rdma_free_queue() for the admin queue frees the
async event QE even though it was never allocated.

Signed-off-by: Israel Rukshin <israelr at mellanox.com>
Reviewed-by: Max Gurtovoy <maxg at mellanox.com>
---
 drivers/nvme/host/rdma.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 7b3f084..8786779 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -536,6 +536,15 @@ static int nvme_rdma_alloc_queue(struct nvme_rdma_ctrl *ctrl,
 		goto out_destroy_cm_id;
 	}
 
+	if (idx == 0) {
+		ret = nvme_rdma_alloc_qe(queue->device->dev,
+					 &ctrl->async_event_sqe,
+					 sizeof(struct nvme_command),
+					 DMA_TO_DEVICE);
+		if (ret)
+			goto out_destroy_cm_id;
+	}
+
 	set_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags);
 
 	return 0;
@@ -795,12 +804,6 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (error)
 		goto out_stop_queue;
 
-	error = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
-			&ctrl->async_event_sqe, sizeof(struct nvme_command),
-			DMA_TO_DEVICE);
-	if (error)
-		goto out_stop_queue;
-
 	return 0;
 
 out_stop_queue:
-- 
1.8.3.1


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-14 12:37 [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue() Israel Rukshin
@ 2018-06-15  8:01 ` Christoph Hellwig
  2018-06-17 10:52   ` Max Gurtovoy
  0 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2018-06-15  8:01 UTC (permalink / raw)


On Thu, Jun 14, 2018 at 12:37:44PM +0000, Israel Rukshin wrote:
> On error, calling nvme_rdma_free_queue() for the admin queue frees the
> async event QE even though it was never allocated.
> 
> Signed-off-by: Israel Rukshin <israelr at mellanox.com>
> Reviewed-by: Max Gurtovoy <maxg at mellanox.com>

This certainly matches what we do in nvme_rdma_free_queue, but it
looks like a little too much special-casing voodoo to me.

The async event QE free in nvme_rdma_free_queue was added by Sagi last
October to fix a reconnect double free.  But I wonder if we just need to
move it back instead and have a different double-free protection, e.g.
in nvme_rdma_free_qe just check if qe->data is set first, and
then NULL it out when freeing.
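
Something along these lines, perhaps (untested sketch; the signature, the
ib_dma_unmap_single() call and the qe->dma field are written from memory,
so double-check against the actual nvme_rdma_qe code):

static void nvme_rdma_free_qe(struct ib_device *ibdev, struct nvme_rdma_qe *qe,
		size_t capsule_size, enum dma_data_direction dir)
{
	/* tolerate being called before allocation, or called twice */
	if (!qe->data)
		return;
	ib_dma_unmap_single(ibdev, qe->dma, capsule_size, dir);
	kfree(qe->data);
	qe->data = NULL;
}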


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-15  8:01 ` Christoph Hellwig
@ 2018-06-17 10:52   ` Max Gurtovoy
  2018-06-19  5:41     ` Christoph Hellwig
  0 siblings, 1 reply; 12+ messages in thread
From: Max Gurtovoy @ 2018-06-17 10:52 UTC (permalink / raw)




On 6/15/2018 11:01 AM, Christoph Hellwig wrote:
> On Thu, Jun 14, 2018 at 12:37:44PM +0000, Israel Rukshin wrote:
>> On error, calling nvme_rdma_free_queue() for the admin queue frees the
>> async event QE even though it was never allocated.
>>
>> Signed-off-by: Israel Rukshin <israelr at mellanox.com>
>> Reviewed-by: Max Gurtovoy <maxg at mellanox.com>
> 
> This certainly matches what we do in nvme_rdma_free_queue, but it
> looks like a little too much special-casing voodoo to me.
> 
> The async event QE free in nvme_rdma_free_queue was added by Sagi last
> October to fix a reconnect double free.  But I wonder if we just need to
> move it back instead and have a different double-free protection, e.g.
> in nvme_rdma_free_qe just check if qe->data is set first, and
> then NULL it out when freeing.
> 

I guess we can, but this is not enough since we'll need to move the call
to nvme_rdma_free_qe to be after stopping the queue. I actually think
that making it symmetrical is the right way to go (we also have a flag,
NVME_RDMA_Q_ALLOCATED, that helps here), but I guess reverting Sagi's
commit + some fix will work too.


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-17 10:52   ` Max Gurtovoy
@ 2018-06-19  5:41     ` Christoph Hellwig
  2018-06-19 10:36       ` Sagi Grimberg
  2018-06-19 11:57       ` Max Gurtovoy
  0 siblings, 2 replies; 12+ messages in thread
From: Christoph Hellwig @ 2018-06-19  5:41 UTC (permalink / raw)


On Sun, Jun 17, 2018 at 01:52:55PM +0300, Max Gurtovoy wrote:
>> back instead and have a different double-free protection, e.g.
>> in nvme_rdma_free_qe just check if qe->data is set first, and
>> then NULL it out when freeing.
>>
>
> I guess we can, but this is not enough since we'll need to move the call
> to nvme_rdma_free_qe to be after stopping the queue. I actually think
> that making it symmetrical is the right way to go (we also have a flag,
> NVME_RDMA_Q_ALLOCATED, that helps here), but I guess reverting Sagi's
> commit + some fix will work too.

Yes, it should be symmetrical either way.  What is the story with
NVME_RDMA_Q_ALLOCATED?


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19  5:41     ` Christoph Hellwig
@ 2018-06-19 10:36       ` Sagi Grimberg
  2018-06-19 12:09         ` Max Gurtovoy
  2018-06-19 11:57       ` Max Gurtovoy
  1 sibling, 1 reply; 12+ messages in thread
From: Sagi Grimberg @ 2018-06-19 10:36 UTC (permalink / raw)



>>> back instead and have a different double-free protection, e.g.
>>> in nvme_rdma_free_qe just check if qe->data is set first, and
>>> then NULL it out when freeing.
>>>
>>
>> I guess we can, but this is not enough since we'll need to move the call
>> to nvme_rdma_free_qe to be after stopping the queue. I actually think
>> that making it symmetrical is the right way to go (we also have a flag,
>> NVME_RDMA_Q_ALLOCATED, that helps here), but I guess reverting Sagi's
>> commit + some fix will work too.

I don't think this is the way to go, and I don't think that reverting it
is the way to go either.

The commit message said:
--
     nvme-rdma: Fix possible double free in reconnect flow

     The fact that we free the async event buffer in
     nvme_rdma_destroy_admin_queue can cause us to free it
     more than once because this happens in every reconnect
     attempt since commit 31fdf1840170. we rely on the queue
     state flags DELETING to avoid this for other resources.

     A more complete fix is to not destroy the admin/io queues
     unconditionally on every reconnect attempt, but its a bit
     more extensive and will go in the next release.
--

Today, we don't destroy the admin queue on every reconnect, so
I think we are OK with restoring it back, but looking at the code,
some more is needed.

I think that the async buffer needs to be allocated right after
nvme_rdma_alloc_queue and to be freed right before nvme_rdma_free_queue.

Israel, does something like [1] work?

> Yes, it should be symmetrical either way.  What is the story with
> NVME_RDMA_Q_ALLOCATED?

This was just to flip the logic to check a positive flag vs. a !negative
one, like we do for LIVE (done in 5013e98b5e8db50144e8f1ca5a96aed95d4d48a0).
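
For reference, the flip looks roughly like this (reconstructed from memory
of that commit, not quoted verbatim):

	/* before: bail out if teardown already started */
	if (test_and_set_bit(NVME_RDMA_Q_DELETING, &queue->flags))
		return;

	/* after: only tear down what was actually allocated */
	if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
		return;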



[1]:
--
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index c9424da0d23e..787917963137 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -560,12 +560,6 @@ static void nvme_rdma_free_queue(struct nvme_rdma_queue *queue)
 	if (!test_and_clear_bit(NVME_RDMA_Q_ALLOCATED, &queue->flags))
 		return;
 
-	if (nvme_rdma_queue_idx(queue) == 0) {
-		nvme_rdma_free_qe(queue->device->dev,
-			&queue->ctrl->async_event_sqe,
-			sizeof(struct nvme_command), DMA_TO_DEVICE);
-	}
-
 	nvme_rdma_destroy_queue_ib(queue);
 	rdma_destroy_id(queue->cm_id);
 }
@@ -739,6 +733,8 @@ static void nvme_rdma_destroy_admin_queue(struct nvme_rdma_ctrl *ctrl,
 		blk_cleanup_queue(ctrl->ctrl.admin_q);
 		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
 	}
+	nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe,
+		sizeof(struct nvme_command), DMA_TO_DEVICE);
 	nvme_rdma_free_queue(&ctrl->queues[0]);
 }
 
@@ -755,11 +751,16 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 
 	ctrl->max_fr_pages = nvme_rdma_get_max_fr_pages(ctrl->device->dev);
 
+	error = nvme_rdma_alloc_qe(ctrl->device->dev, &ctrl->async_event_sqe,
+			sizeof(struct nvme_command), DMA_TO_DEVICE);
+	if (error)
+		goto out_free_queue;
+
 	if (new) {
 		ctrl->ctrl.admin_tagset = nvme_rdma_alloc_tagset(&ctrl->ctrl, true);
 		if (IS_ERR(ctrl->ctrl.admin_tagset)) {
 			error = PTR_ERR(ctrl->ctrl.admin_tagset);
-			goto out_free_queue;
+			goto out_free_async_qe;
 		}
 
 		ctrl->ctrl.admin_q = blk_mq_init_queue(&ctrl->admin_tag_set);
@@ -795,12 +796,6 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (error)
 		goto out_stop_queue;
 
-	error = nvme_rdma_alloc_qe(ctrl->queues[0].device->dev,
-			&ctrl->async_event_sqe, sizeof(struct nvme_command),
-			DMA_TO_DEVICE);
-	if (error)
-		goto out_stop_queue;
-
 	return 0;
 
 out_stop_queue:
@@ -811,6 +806,9 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 out_free_tagset:
 	if (new)
 		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
+out_free_async_qe:
+	nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe,
+		sizeof(struct nvme_command), DMA_TO_DEVICE);
 out_free_queue:
 	nvme_rdma_free_queue(&ctrl->queues[0]);
 	return error;
--


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19  5:41     ` Christoph Hellwig
  2018-06-19 10:36       ` Sagi Grimberg
@ 2018-06-19 11:57       ` Max Gurtovoy
  1 sibling, 0 replies; 12+ messages in thread
From: Max Gurtovoy @ 2018-06-19 11:57 UTC (permalink / raw)




On 6/19/2018 8:41 AM, Christoph Hellwig wrote:
> On Sun, Jun 17, 2018 at 01:52:55PM +0300, Max Gurtovoy wrote:
>>> back instead and have a different double-free protection, e.g.
>>> in nvme_rdma_free_qe just check if qe->data is set first, and
>>> then NULL it out when freeing.
>>>
>>
>> I guess we can, but this is not enough since we'll need to move the call
>> to nvme_rdma_free_qe to be after stopping the queue. I actually think
>> that making it symmetrical is the right way to go (we also have a flag,
>> NVME_RDMA_Q_ALLOCATED, that helps here), but I guess reverting Sagi's
>> commit + some fix will work too.
> 
> Yes, it should be symmetrical either way.  What is the story with
> NVME_RDMA_Q_ALLOCATED?
> 

No real story, just saying that this patch guarantees that
allocating/releasing the qe will be guarded by the NVME_RDMA_Q_ALLOCATED
bit.

In order to make it symmetrical and correct the second way, we'll need
to allocate it before nvme_rdma_start_queue and free it after
nvme_rdma_stop_queue.

Since we have some issues and WIP regarding the location of
nvme_rdma_stop_queue, I think we should take the first way for now (the
way proposed in this patch).
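
For illustration, that second ordering would look roughly like this inside
nvme_rdma_configure_admin_queue() (hypothetical placement and error labels,
not a real patch):

	/* allocate the async event QE before the queue is started ... */
	error = nvme_rdma_alloc_qe(ctrl->device->dev, &ctrl->async_event_sqe,
			sizeof(struct nvme_command), DMA_TO_DEVICE);
	if (error)
		goto out_free_queue;

	error = nvme_rdma_start_queue(ctrl, 0);
	if (error)
		goto out_free_async_qe;

	/* ... and on teardown, free it only after the queue is stopped */
	nvme_rdma_stop_queue(&ctrl->queues[0]);
	nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe,
			sizeof(struct nvme_command), DMA_TO_DEVICE);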


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19 10:36       ` Sagi Grimberg
@ 2018-06-19 12:09         ` Max Gurtovoy
  2018-06-19 12:30           ` Sagi Grimberg
  0 siblings, 1 reply; 12+ messages in thread
From: Max Gurtovoy @ 2018-06-19 12:09 UTC (permalink / raw)




On 6/19/2018 1:36 PM, Sagi Grimberg wrote:
> 
>>>> back instead and have a different double-free protection, e.g.
>>>> in nvme_rdma_free_qe just check if qe->data is set first, and
>>>> then NULL it out when freeing.
>>>>
>>>
>>> I guess we can, but this is not enough since we'll need to move the
>>> call to nvme_rdma_free_qe to be after stopping the queue. I actually
>>> think that making it symmetrical is the right way to go (we also have
>>> a flag, NVME_RDMA_Q_ALLOCATED, that helps here), but I guess reverting
>>> Sagi's commit + some fix will work too.
> 
> I don't think this is the way to go, and I don't think that reverting it
> is the way to go either.
> 
> The commit message said:
> --
>     nvme-rdma: Fix possible double free in reconnect flow
> 
>     The fact that we free the async event buffer in
>     nvme_rdma_destroy_admin_queue can cause us to free it
>     more than once because this happens in every reconnect
>     attempt since commit 31fdf1840170. we rely on the queue
>     state flags DELETING to avoid this for other resources.
> 
>     A more complete fix is to not destroy the admin/io queues
>     unconditionally on every reconnect attempt, but its a bit
>     more extensive and will go in the next release.
> --
> 
> Today, we don't destroy the admin queue on every reconnect, so
> I think we are OK with restoring it back, but looking at the code,
> some more is needed.
> 
> I think that the async buffer needs to be allocated right after
> nvme_rdma_alloc_queue and to be freed right before nvme_rdma_free_queue.

Why? We call nvme_start_ctrl after configuring the admin queue, and we
free the async buffer after we drain the QP.

Am I missing something?


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19 12:09         ` Max Gurtovoy
@ 2018-06-19 12:30           ` Sagi Grimberg
  2018-06-19 16:08             ` Max Gurtovoy
  0 siblings, 1 reply; 12+ messages in thread
From: Sagi Grimberg @ 2018-06-19 12:30 UTC (permalink / raw)



>> I think that the async buffer needs to be allocated right after
>> nvme_rdma_alloc_queue and to be freed right before nvme_rdma_free_queue.
> 
> Why? We call nvme_start_ctrl after configuring the admin queue, and we
> free the async buffer after we drain the QP.
> 
> Am I missing something?

If I'm not mistaken (as the change log wasn't clear enough), the issue
was that in reset we fail to configure the admin queue (before the async
event buffer was allocated) and then call reconnect_or_remove, which will
eventually call nvme_rdma_destroy_admin_queue (freeing the async event
buffer).

Allocating the async event buffer before nvme_rdma_start_queue should
guarantee that if we failed in nvme_rdma_start_queue we won't free the
buffer, and if we failed after, we will.


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19 12:30           ` Sagi Grimberg
@ 2018-06-19 16:08             ` Max Gurtovoy
  2018-06-19 16:12               ` Sagi Grimberg
  0 siblings, 1 reply; 12+ messages in thread
From: Max Gurtovoy @ 2018-06-19 16:08 UTC (permalink / raw)




On 6/19/2018 3:30 PM, Sagi Grimberg wrote:
> 
>>> I think that the async buffer needs to be allocated right after
>>> nvme_rdma_alloc_queue and to be freed right before nvme_rdma_free_queue.
>>
>> Why? We call nvme_start_ctrl after configuring the admin queue, and
>> we free the async buffer after we drain the QP.
>>
>> Am I missing something?
> 
> If I'm not mistaken (as the change log wasn't clear enough), the issue
> was that in reset we fail to configure the admin queue (before the async
> event buffer was allocated) and then call reconnect_or_remove, which will
> eventually call nvme_rdma_destroy_admin_queue (freeing the async event
> buffer).
> 

No, this is not the issue. The issue is that in case of a failure during
nvme_rdma_configure_admin_queue we go to an error flow that frees the
never-allocated buffer. This is happening because the allocation/free are
not symmetrical.
The patch fixes that.
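
Spelled out as a call flow (an illustrative reconstruction; the labels are
the ones from the hunks quoted earlier in the thread):

	/*
	 * nvme_rdma_configure_admin_queue()
	 *   nvme_rdma_alloc_queue(ctrl, 0, ...)  -> sets NVME_RDMA_Q_ALLOCATED
	 *   ... a later step fails before nvme_rdma_alloc_qe() runs ...
	 *   goto out_stop_queue / out_free_tagset / out_free_queue
	 *     nvme_rdma_free_queue(&ctrl->queues[0])
	 *       nvme_rdma_free_qe(..., &ctrl->async_event_sqe, ...)
	 *         -> frees a QE that was never allocated
	 */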

> Allocating the async event buffer before nvme_rdma_start_queue should
> guarantee that if we failed in nvme_rdma_start_queue we won't free the
> buffer, and if we failed after, we will.


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19 16:08             ` Max Gurtovoy
@ 2018-06-19 16:12               ` Sagi Grimberg
  2018-06-19 16:21                 ` Max Gurtovoy
  0 siblings, 1 reply; 12+ messages in thread
From: Sagi Grimberg @ 2018-06-19 16:12 UTC (permalink / raw)



>>>> I think that the async buffer needs to be allocated right after
>>>> nvme_rdma_alloc_queue and to be freed right before nvme_rdma_free_queue.
>>>
>>> Why? We call nvme_start_ctrl after configuring the admin queue, and
>>> we free the async buffer after we drain the QP.
>>>
>>> Am I missing something?
>>
>> If I'm not mistaken (as the change log wasn't clear enough), the issue
>> was that in reset we fail to configure the admin queue (before the async
>> event buffer was allocated) and then call reconnect_or_remove, which will
>> eventually call nvme_rdma_destroy_admin_queue (freeing the async event
>> buffer).
>>
> 
> No, this is not the issue. The issue is that in case of a failure during
> nvme_rdma_configure_admin_queue we go to an error flow that frees the
> never-allocated buffer. This is happening because the allocation/free are
> not symmetrical.
> The patch fixes that.

But it does not address the issue where reset_ctrl fails; for that you
need the async event buffer to be allocated before the queue is started.


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19 16:12               ` Sagi Grimberg
@ 2018-06-19 16:21                 ` Max Gurtovoy
  2018-06-19 16:38                   ` Sagi Grimberg
  0 siblings, 1 reply; 12+ messages in thread
From: Max Gurtovoy @ 2018-06-19 16:21 UTC (permalink / raw)




On 6/19/2018 7:12 PM, Sagi Grimberg wrote:
> 
>>>>> I think that the async buffer needs to be allocated right after
>>>>> nvme_rdma_alloc_queue and to be freed right before
>>>>> nvme_rdma_free_queue.
>>>>
>>>> Why? We call nvme_start_ctrl after configuring the admin queue, and
>>>> we free the async buffer after we drain the QP.
>>>>
>>>> Am I missing something?
>>>
>>> If I'm not mistaken (as the change log wasn't clear enough), the issue
>>> was that in reset we fail to configure the admin queue (before the async
>>> event buffer was allocated) and then call reconnect_or_remove, which will
>>> eventually call nvme_rdma_destroy_admin_queue (freeing the async event
>>> buffer).
>>>
>>
>> No, this is not the issue. The issue is that in case of a failure during
>> nvme_rdma_configure_admin_queue we go to an error flow that frees the
>> never-allocated buffer. This is happening because the allocation/free
>> are not symmetrical.
>> The patch fixes that.
> 
> But it does not address the issue where reset_ctrl fails; for that you
> need the async event buffer to be allocated before the queue is started.

Fails where exactly?
I wasn't talking about that, but I sent another patch yesterday to fix
the reset error flow (please review):

---
  drivers/nvme/host/rdma.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index cef24ad..c193f61 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1781,7 +1781,7 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 	if (ctrl->ctrl.queue_count > 1) {
 		ret = nvme_rdma_configure_io_queues(ctrl, false);
 		if (ret)
-			goto out_fail;
+			goto destroy_admin;
 	}
 
 	changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
@@ -1795,6 +1795,8 @@ static void nvme_rdma_reset_ctrl_work(struct work_struct *work)
 
 	return;
 
+destroy_admin:
+	nvme_rdma_destroy_admin_queue(ctrl, false);
 out_fail:
 	++ctrl->ctrl.nr_reconnects;
 	nvme_rdma_reconnect_or_remove(ctrl);
-- 


* [PATCH] nvme: Fix error flow at nvme_rdma_configure_admin_queue()
  2018-06-19 16:21                 ` Max Gurtovoy
@ 2018-06-19 16:38                   ` Sagi Grimberg
  0 siblings, 0 replies; 12+ messages in thread
From: Sagi Grimberg @ 2018-06-19 16:38 UTC (permalink / raw)



>>>>>> I think that the async buffer needs to be allocated right after
>>>>>> nvme_rdma_alloc_queue and to be freed right before
>>>>>> nvme_rdma_free_queue.
>>>>>
>>>>> Why? We call nvme_start_ctrl after configuring the admin queue,
>>>>> and we free the async buffer after we drain the QP.
>>>>>
>>>>> Am I missing something?
>>>>
>>>> If I'm not mistaken (as the change log wasn't clear enough), the issue
>>>> was that in reset we fail to configure the admin queue (before the
>>>> async event buffer was allocated) and then call reconnect_or_remove,
>>>> which will eventually call nvme_rdma_destroy_admin_queue (freeing the
>>>> async event buffer).
>>>>
>>>
>>> No, this is not the issue. The issue is that in case of a failure
>>> during nvme_rdma_configure_admin_queue we go to an error flow that
>>> frees the never-allocated buffer. This is happening because the
>>> allocation/free are not symmetrical.
>>> The patch fixes that.
>>
>> But it does not address the issue where reset_ctrl fails; for that you
>> need the async event buffer to be allocated before the queue is started.
> 
> Fails where exactly?

Where I described. Anyway, the point is that I think we should avoid the
magic "if qid == 0" inside the generic nvme_rdma_alloc/free_queue
approach. Anything specific to the admin queue should move out to
nvme_rdma_configure/destroy_admin_queue.

What I sent should fix that, so if that resolves the issue, I prefer
to have it this way. If not, I'd like to understand why it *needs* to
be in nvme_rdma_alloc/free_queue.

> I wasn't talking about that, but I sent another patch yesterday to fix
> the reset error flow (please review):

This is a good fix; it's also incorporated in the centralization patch I
sent out. I can rebase on top of it once I collect some feedback, as it's
stable material.

