* [PATCH 0/4] small patchset for 5.2
@ 2019-04-24 18:53 Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 1/4] nvmet-tcp: don't fail maxr2t greater than 1 Sagi Grimberg
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Sagi Grimberg @ 2019-04-24 18:53 UTC (permalink / raw)


3 bug fixes and a small cleanup.

Sagi Grimberg (4):
  nvmet-tcp: don't fail maxr2t greater than 1
  nvme-tcp: Fix a NULL deref when an admin connect times out
  nvme-rdma: Fix a NULL deref when an admin connect times out
  nvme-tcp: rename function to have nvme_tcp prefix

 drivers/nvme/host/rdma.c  | 10 ++++++----
 drivers/nvme/host/tcp.c   | 18 ++++++++++--------
 drivers/nvme/target/tcp.c |  6 ------
 3 files changed, 16 insertions(+), 18 deletions(-)

-- 
2.17.1


* [PATCH 1/4] nvmet-tcp: don't fail maxr2t greater than 1
  2019-04-24 18:53 [PATCH 0/4] small patchset for 5.2 Sagi Grimberg
@ 2019-04-24 18:53 ` Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 2/4] nvme-tcp: Fix a NULL deref when an admin connect times out Sagi Grimberg
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Sagi Grimberg @ 2019-04-24 18:53 UTC (permalink / raw)


The host may support a maxr2t greater than 1, but nothing prevents us
from sending a single R2T at a time, as we already do.
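For reference, the MAXR2T field in the ICReq PDU is a 0's-based value
(which is why the removed pr_err printed maxr2t + 1). A minimal
standalone sketch of the semantics, with hypothetical helper names
rather than the driver code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Standalone sketch (not the kernel code): per the NVMe/TCP transport
 * spec, the ICReq MAXR2T field is a 0's-based value, so the host
 * permits (maxr2t + 1) outstanding R2T PDUs per command. */
static uint32_t host_max_outstanding_r2t(uint32_t maxr2t)
{
	/* the kernel applies le32_to_cpu() to the wire value first */
	return maxr2t + 1;
}

/* The target only ever issues a single R2T per command at a time,
 * which is within any host-advertised limit, so rejecting
 * maxr2t != 0 (as the removed check did) was unnecessary. */
static bool target_single_r2t_ok(uint32_t maxr2t)
{
	return host_max_outstanding_r2t(maxr2t) >= 1;
}
```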

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 drivers/nvme/target/tcp.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 0a941abf56ec..02e9444565ea 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -774,12 +774,6 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
 		return -EPROTO;
 	}
 
-	if (icreq->maxr2t != 0) {
-		pr_err("queue %d: unsupported maxr2t %d\n", queue->idx,
-			le32_to_cpu(icreq->maxr2t) + 1);
-		return -EPROTO;
-	}
-
 	queue->hdr_digest = !!(icreq->digest & NVME_TCP_HDR_DIGEST_ENABLE);
 	queue->data_digest = !!(icreq->digest & NVME_TCP_DATA_DIGEST_ENABLE);
 	if (queue->hdr_digest || queue->data_digest) {
-- 
2.17.1


* [PATCH 2/4] nvme-tcp: Fix a NULL deref when an admin connect times out
  2019-04-24 18:53 [PATCH 0/4] small patchset for 5.2 Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 1/4] nvmet-tcp: don't fail maxr2t greater than 1 Sagi Grimberg
@ 2019-04-24 18:53 ` Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 3/4] nvme-rdma: " Sagi Grimberg
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Sagi Grimberg @ 2019-04-24 18:53 UTC (permalink / raw)


If the admin startup sequence times out, we might not yet have
allocated the I/O tagset, which causes the teardown sequence to crash.
Make nvme_tcp_teardown_io_queues safe by not iterating over inflight
tags if the tagset was not allocated.
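The guard this patch adds can be sketched outside the kernel with
stand-in types; every name below is a hypothetical simplification, not
the blk-mq API:

```c
#include <stddef.h>

/* Hypothetical stand-ins for blk-mq, only to illustrate the guard. */
struct tag_set { int nr_tags; };

typedef void (busy_iter_fn)(int tag, void *data);

static void cancel_request(int tag, void *data)
{
	(void)tag;
	(*(int *)data)++;	/* count cancelled inflight requests */
}

static void tagset_busy_iter(struct tag_set *set, busy_iter_fn *fn,
		void *data)
{
	for (int i = 0; i < set->nr_tags; i++)
		fn(i, data);
}

/* Teardown may run before the I/O tagset is allocated (e.g. when an
 * admin connect times out), so only iterate when the tagset exists.
 * Returns how many inflight requests were cancelled. */
static int teardown_io_queues(struct tag_set *tagset)
{
	int cancelled = 0;

	if (tagset)	/* the NULL check this patch adds */
		tagset_busy_iter(tagset, cancel_request, &cancelled);
	/* ...stop and destroy the queues... */
	return cancelled;
}

/* convenience wrapper for exercising the allocated-tagset path */
static int teardown_with_nr_tags(int nr)
{
	struct tag_set set = { .nr_tags = nr };

	return teardown_io_queues(&set);
}
```

Without the NULL check, the first call below would dereference a NULL
tagset instead of returning cleanly.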

Fixes: 39d57757467b ("nvme-tcp: fix timeout handler")
Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 drivers/nvme/host/tcp.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 5e761bf17ef2..7201fbeb4a8f 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1719,7 +1719,9 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
 {
 	blk_mq_quiesce_queue(ctrl->admin_q);
 	nvme_tcp_stop_queue(ctrl, 0);
-	blk_mq_tagset_busy_iter(ctrl->admin_tagset, nvme_cancel_request, ctrl);
+	if (ctrl->admin_tagset)
+		blk_mq_tagset_busy_iter(ctrl->admin_tagset,
+			nvme_cancel_request, ctrl);
 	blk_mq_unquiesce_queue(ctrl->admin_q);
 	nvme_tcp_destroy_admin_queue(ctrl, remove);
 }
@@ -1731,7 +1733,9 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 		return;
 	nvme_stop_queues(ctrl);
 	nvme_tcp_stop_io_queues(ctrl);
-	blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl);
+	if (ctrl->tagset)
+		blk_mq_tagset_busy_iter(ctrl->tagset,
+			nvme_cancel_request, ctrl);
 	if (remove)
 		nvme_start_queues(ctrl);
 	nvme_tcp_destroy_io_queues(ctrl, remove);
-- 
2.17.1


* [PATCH 3/4] nvme-rdma: Fix a NULL deref when an admin connect times out
  2019-04-24 18:53 [PATCH 0/4] small patchset for 5.2 Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 1/4] nvmet-tcp: don't fail maxr2t greater than 1 Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 2/4] nvme-tcp: Fix a NULL deref when an admin connect times out Sagi Grimberg
@ 2019-04-24 18:53 ` Sagi Grimberg
  2019-04-24 18:53 ` [PATCH 4/4] nvme-tcp: rename function to have nvme_tcp prefix Sagi Grimberg
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Sagi Grimberg @ 2019-04-24 18:53 UTC (permalink / raw)


If the admin startup sequence times out, we might not yet have
allocated the I/O tagset, which causes the teardown sequence to crash.
Make nvme_rdma_teardown_io_queues safe by not iterating over inflight
tags if the tagset was not allocated.

Fixes: 4c174e636674 ("nvme-rdma: fix timeout handler")
Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 drivers/nvme/host/rdma.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 11a5ecae78c8..e1824c2e0a1c 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -914,8 +914,9 @@ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
 {
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
 	nvme_rdma_stop_queue(&ctrl->queues[0]);
-	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set, nvme_cancel_request,
-			&ctrl->ctrl);
+	if (ctrl->ctrl.admin_tagset)
+		blk_mq_tagset_busy_iter(ctrl->ctrl.admin_tagset,
+			nvme_cancel_request, &ctrl->ctrl);
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 	nvme_rdma_destroy_admin_queue(ctrl, remove);
 }
@@ -926,8 +927,9 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
 	if (ctrl->ctrl.queue_count > 1) {
 		nvme_stop_queues(&ctrl->ctrl);
 		nvme_rdma_stop_io_queues(ctrl);
-		blk_mq_tagset_busy_iter(&ctrl->tag_set, nvme_cancel_request,
-				&ctrl->ctrl);
+		if (ctrl->ctrl.tagset)
+			blk_mq_tagset_busy_iter(ctrl->ctrl.tagset,
+				nvme_cancel_request, &ctrl->ctrl);
 		if (remove)
 			nvme_start_queues(&ctrl->ctrl);
 		nvme_rdma_destroy_io_queues(ctrl, remove);
-- 
2.17.1


* [PATCH 4/4] nvme-tcp: rename function to have nvme_tcp prefix
  2019-04-24 18:53 [PATCH 0/4] small patchset for 5.2 Sagi Grimberg
                   ` (2 preceding siblings ...)
  2019-04-24 18:53 ` [PATCH 3/4] nvme-rdma: " Sagi Grimberg
@ 2019-04-24 18:53 ` Sagi Grimberg
       [not found] ` <CGME20190424185359epcas4p1408cfaf1978df3ab0b156d1658902b78@epcms2p3>
  2019-04-25 14:53 ` [PATCH 0/4] small patchset for 5.2 Christoph Hellwig
  5 siblings, 0 replies; 7+ messages in thread
From: Sagi Grimberg @ 2019-04-24 18:53 UTC (permalink / raw)


Usually the nvme_ prefix is reserved for core functions. While we're
cleaning up, remove redundant empty lines.
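The resulting shape follows the common kernel convention: the public
entry point carries the subsystem prefix, and the inner worker it wraps
takes the double-underscore variant of the same name. A hedged
standalone sketch (the _sketch names and counts are illustrative, not
the driver functions):

```c
/* Inner worker: does the actual per-queue setup; returns how many
 * queues were set up (a stand-in for the real allocation loop). */
static int __nvme_tcp_alloc_io_queues_sketch(int nr)
{
	int allocated = 0;

	for (int i = 0; i < nr; i++)
		allocated++;	/* stand-in for allocating queue i */
	return allocated;
}

/* Outer function keeps the nvme_tcp_ prefix (transport code, not core
 * nvme_ code), decides the queue count, and delegates to the
 * double-underscore helper. */
static int nvme_tcp_alloc_io_queues_sketch(int requested)
{
	int nr = requested > 0 ? requested : 1;

	return __nvme_tcp_alloc_io_queues_sketch(nr);
}
```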

Signed-off-by: Sagi Grimberg <sagi at grimberg.me>
---
 drivers/nvme/host/tcp.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 7201fbeb4a8f..1a5217c85158 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -476,7 +476,6 @@ static int nvme_tcp_handle_c2h_data(struct nvme_tcp_queue *queue,
 	}
 
 	return 0;
-
 }
 
 static int nvme_tcp_handle_comp(struct nvme_tcp_queue *queue,
@@ -637,7 +636,6 @@ static inline void nvme_tcp_end_request(struct request *rq, u16 status)
 	nvme_end_request(rq, cpu_to_le16(status << 1), res);
 }
 
-
 static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb,
 			      unsigned int *offset, size_t *len)
 {
@@ -1543,7 +1541,7 @@ static int nvme_tcp_alloc_admin_queue(struct nvme_ctrl *ctrl)
 	return ret;
 }
 
-static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
+static int __nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
 {
 	int i, ret;
 
@@ -1574,7 +1572,7 @@ static unsigned int nvme_tcp_nr_io_queues(struct nvme_ctrl *ctrl)
 	return nr_io_queues;
 }
 
-static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl)
+static int nvme_tcp_alloc_io_queues(struct nvme_ctrl *ctrl)
 {
 	unsigned int nr_io_queues;
 	int ret;
@@ -1591,7 +1589,7 @@ static int nvme_alloc_io_queues(struct nvme_ctrl *ctrl)
 	dev_info(ctrl->device,
 		"creating %d I/O queues.\n", nr_io_queues);
 
-	return nvme_tcp_alloc_io_queues(ctrl);
+	return __nvme_tcp_alloc_io_queues(ctrl);
 }
 
 static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
@@ -1608,7 +1606,7 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
 {
 	int ret;
 
-	ret = nvme_alloc_io_queues(ctrl);
+	ret = nvme_tcp_alloc_io_queues(ctrl);
 	if (ret)
 		return ret;
 
-- 
2.17.1


* [PATCH 4/4] nvme-tcp: rename function to have nvme_tcp prefix
       [not found] ` <CGME20190424185359epcas4p1408cfaf1978df3ab0b156d1658902b78@epcms2p3>
@ 2019-04-25  6:51   ` Minwoo Im
  0 siblings, 0 replies; 7+ messages in thread
From: Minwoo Im @ 2019-04-25  6:51 UTC (permalink / raw)


It looks good to me.

Reviewed-by: Minwoo Im <minwoo.im at samsung.com>

> -----Original Message-----
> From: Linux-nvme [mailto:linux-nvme-bounces at lists.infradead.org] On Behalf
> Of Sagi Grimberg
> Sent: Thursday, April 25, 2019 3:53 AM
> To: Christoph Hellwig; Keith Busch
> Cc: linux-nvme at lists.infradead.org
> Subject: [PATCH 4/4] nvme-tcp: rename function to have nvme_tcp prefix


* [PATCH 0/4] small patchset for 5.2
  2019-04-24 18:53 [PATCH 0/4] small patchset for 5.2 Sagi Grimberg
                   ` (4 preceding siblings ...)
       [not found] ` <CGME20190424185359epcas4p1408cfaf1978df3ab0b156d1658902b78@epcms2p3>
@ 2019-04-25 14:53 ` Christoph Hellwig
  5 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2019-04-25 14:53 UTC (permalink / raw)


Thanks,

applied to nvme-5.2.


