* [PATCH v3] nvmet-rdma: fix null dereference under heavy load
@ 2019-01-03 17:35 Raju Rangoju
From: Raju Rangoju @ 2019-01-03 17:35 UTC
  To: sagi, linux-nvme; +Cc: maxg, swise, rajur, stable

Under heavy load, if we have no pre-allocated rsps left, we dynamically
allocate a rsp, but we do not allocate memory for its nvme_completion
(rsp->req.rsp). In that case, accessing pointer fields
(req->rsp->status) in nvmet_req_init() results in a crash.
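
For illustration (not part of the patch), the pre-fix fallback path
looked roughly like this; field and function names follow the driver
at this commit:

	/* fallback allocation in nvmet_rdma_get_rsp() */
	rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);  /* rsp->req.rsp left uninitialized */

	/* later, in nvmet_req_init(), with req == &rsp->req: */
	req->rsp->status = 0;  /* dereferences the bogus rsp->req.rsp pointer */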

To fix this, allocate the memory for the nvme_completion by calling
nvmet_rdma_alloc_rsp() on this fallback path.
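
For reference, a rough paraphrase of what nvmet_rdma_alloc_rsp() does
at this point in the tree (error unwinding and send-WR setup trimmed);
this is a sketch, not the exact function body:

	static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
					struct nvmet_rdma_rsp *r)
	{
		/* allocate the NVMe completion that is returned via RDMA SEND */
		r->req.rsp = kmalloc(sizeof(*r->req.rsp), GFP_KERNEL);
		if (!r->req.rsp)
			return -ENOMEM;

		/* DMA-map the completion for the send side */
		r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.rsp,
				sizeof(*r->req.rsp), DMA_TO_DEVICE);
		if (ib_dma_mapping_error(ndev->device, r->send_sge.addr)) {
			kfree(r->req.rsp);
			return -ENOMEM;
		}

		/* ... sge length/lkey and send work request setup elided ... */
		return 0;
	}

Calling this on the fallback path gives the dynamically allocated rsp
the same nvme_completion backing as the pre-allocated ones, so
nvmet_req_init() no longer touches an unset pointer.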

Fixes: 8407879c ("nvmet-rdma: fix possible bogus dereference under heavy load")
Cc: <stable@vger.kernel.org>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Raju Rangoju <rajur@chelsio.com>

--
Changes from v2:
- Dropped the unneeded initialization of ret variable

Changes from v1:
- Moved the integer declaration into the 'if' block
- Used unlikely() for the datapath condition
---
 drivers/nvme/target/rdma.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index a8d23eb80192..a884e3a0e8af 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -139,6 +139,10 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc);
 static void nvmet_rdma_qp_event(struct ib_event *event, void *priv);
 static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue);
+static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
+				struct nvmet_rdma_rsp *r);
+static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
+				struct nvmet_rdma_rsp *r);
 
 static const struct nvmet_fabrics_ops nvmet_rdma_ops;
 
@@ -182,9 +186,17 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
 	spin_unlock_irqrestore(&queue->rsps_lock, flags);
 
 	if (unlikely(!rsp)) {
-		rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
+		int ret;
+
+		rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
 		if (unlikely(!rsp))
 			return NULL;
+		ret = nvmet_rdma_alloc_rsp(queue->dev, rsp);
+		if (unlikely(ret)) {
+			kfree(rsp);
+			return NULL;
+		}
+
 		rsp->allocated = true;
 	}
 
@@ -197,6 +209,7 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
 	unsigned long flags;
 
 	if (unlikely(rsp->allocated)) {
+		nvmet_rdma_free_rsp(rsp->queue->dev, rsp);
 		kfree(rsp);
 		return;
 	}
-- 
2.12.0

