linux-nvme.lists.infradead.org archive mirror
* [PATCH v3] nvme-rdma: handle nvme completion data length
@ 2020-10-25 11:51 zhenwei pi
  2020-10-26  7:40 ` Sagi Grimberg
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: zhenwei pi @ 2020-10-25 11:51 UTC (permalink / raw)
  To: kbusch, axboe, hch, sagi; +Cc: pizhenwei, linux-kernel, linux-nvme, lengchao

Hit a kernel warning:
refcount_t: underflow; use-after-free.
WARNING: CPU: 0 PID: 0 at lib/refcount.c:28

RIP: 0010:refcount_warn_saturate+0xd9/0xe0
Call Trace:
 <IRQ>
 nvme_rdma_recv_done+0xf3/0x280 [nvme_rdma]
 __ib_process_cq+0x76/0x150 [ib_core]
 ...

The reason is that a zero-byte message was received from the target,
and the host side continued to process it without a length check, so
the previous CQE was processed twice.

Do a sanity check on the received data length and try to recover from
the corrupted-CQE case.

Because a zero-byte message is not defined in the spec, using
zero-byte messages to detect dead connections at the transport layer
is not standard; for now, still treat it as illegal.

Thanks to Chao Leng & Sagi for suggestions.

Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
---
 drivers/nvme/host/rdma.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index aad829a2b50d..40a0a3b6476c 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1768,6 +1768,14 @@ static void nvme_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
 		return;
 	}
 
+	/* sanity checking for received data length */
+	if (unlikely(wc->byte_len < len)) {
+		dev_err(queue->ctrl->ctrl.device,
+			"Unexpected nvme completion length(%d)\n", wc->byte_len);
+		nvme_rdma_error_recovery(queue->ctrl);
+		return;
+	}
+
 	ib_dma_sync_single_for_cpu(ibdev, qe->dma, len, DMA_FROM_DEVICE);
 	/*
 	 * AEN requests are special as they don't time out and can
-- 
2.11.0


_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH v3] nvme-rdma: handle nvme completion data length
  2020-10-25 11:51 [PATCH v3] nvme-rdma: handle nvme completion data length zhenwei pi
@ 2020-10-26  7:40 ` Sagi Grimberg
  2020-10-27  9:07 ` Christoph Hellwig
  2020-10-28 16:58 ` Max Gurtovoy
  2 siblings, 0 replies; 4+ messages in thread
From: Sagi Grimberg @ 2020-10-26  7:40 UTC (permalink / raw)
  To: zhenwei pi, kbusch, axboe, hch; +Cc: linux-kernel, linux-nvme, lengchao

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>


* Re: [PATCH v3] nvme-rdma: handle nvme completion data length
  2020-10-25 11:51 [PATCH v3] nvme-rdma: handle nvme completion data length zhenwei pi
  2020-10-26  7:40 ` Sagi Grimberg
@ 2020-10-27  9:07 ` Christoph Hellwig
  2020-10-28 16:58 ` Max Gurtovoy
  2 siblings, 0 replies; 4+ messages in thread
From: Christoph Hellwig @ 2020-10-27  9:07 UTC (permalink / raw)
  To: zhenwei pi; +Cc: sagi, linux-kernel, linux-nvme, axboe, lengchao, kbusch, hch

Thanks,

applied to nvme-5.10.


* Re: [PATCH v3] nvme-rdma: handle nvme completion data length
  2020-10-25 11:51 [PATCH v3] nvme-rdma: handle nvme completion data length zhenwei pi
  2020-10-26  7:40 ` Sagi Grimberg
  2020-10-27  9:07 ` Christoph Hellwig
@ 2020-10-28 16:58 ` Max Gurtovoy
  2 siblings, 0 replies; 4+ messages in thread
From: Max Gurtovoy @ 2020-10-28 16:58 UTC (permalink / raw)
  To: zhenwei pi, kbusch, axboe, hch, sagi; +Cc: linux-kernel, linux-nvme, lengchao


On 10/25/2020 1:51 PM, zhenwei pi wrote:
> Hit a kernel warning:
> refcount_t: underflow; use-after-free.
> WARNING: CPU: 0 PID: 0 at lib/refcount.c:28
>
> RIP: 0010:refcount_warn_saturate+0xd9/0xe0
> Call Trace:
>   <IRQ>
>   nvme_rdma_recv_done+0xf3/0x280 [nvme_rdma]
>   __ib_process_cq+0x76/0x150 [ib_core]
>   ...
>
> The reason is that a zero-byte message was received from the target,
> and the host side continued to process it without a length check, so
> the previous CQE was processed twice.
>
> Do a sanity check on the received data length and try to recover from
> the corrupted-CQE case.
>
> Because a zero-byte message is not defined in the spec, using
> zero-byte messages to detect dead connections at the transport layer
> is not standard; for now, still treat it as illegal.
>
> Thanks to Chao Leng & Sagi for suggestions.
>
> Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
> ---
>   drivers/nvme/host/rdma.c | 8 ++++++++
>   1 file changed, 8 insertions(+)
>
It seems strange that the target sends zero-byte packets.

Can you specify which target this is and the scenario?



end of thread, other threads:[~2020-10-28 16:58 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-25 11:51 [PATCH v3] nvme-rdma: handle nvme completion data length zhenwei pi
2020-10-26  7:40 ` Sagi Grimberg
2020-10-27  9:07 ` Christoph Hellwig
2020-10-28 16:58 ` Max Gurtovoy
