* [PATCH 1/2] nvmet-tcp: set MSG_MORE only if we actually have more to send
@ 2020-03-12  6:26 Sagi Grimberg
  2020-03-12  6:26 ` [PATCH 2/2] nvmet-tcp: optimize tcp stack TX when data digest is used Sagi Grimberg
From: Sagi Grimberg @ 2020-03-12  6:26 UTC
  To: linux-nvme; +Cc: Keith Busch, Mark Wunderlich, Christoph Hellwig

When we send PDU data, we want to optimize the tcp stack
operation if we have more data to send. So we set MSG_MORE
when any of the following holds:
- We have more fragments coming in the batch, or
- We have more data to send in this PDU, or
- We have a data digest trailer to send after the data, or
- We still send an NVMe completion after the data, i.e. the
  SUCCESS flag optimization is not in effect (it is used only
  when the sq_head pointer update is disabled)

This addresses a regression in QD=1 workloads with the SUCCESS
flag optimization, where we unconditionally set MSG_MORE even
though we had no more data to send, so the TCP stack could hold
back the final segment.

Fixes: 70583295388a ("nvmet-tcp: implement C2HData SUCCESS optimization")
Reported-by: Mark Wunderlich <mark.wunderlich@intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/tcp.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 1f4322ee7dbe..e5d000f12059 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -525,7 +525,7 @@ static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
 	return 1;
 }
 
-static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd)
+static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
 {
 	struct nvmet_tcp_queue *queue = cmd->queue;
 	int ret;
@@ -533,9 +533,15 @@ static int nvmet_try_send_data(struct nvmet_tcp_cmd *cmd)
 	while (cmd->cur_sg) {
 		struct page *page = sg_page(cmd->cur_sg);
 		u32 left = cmd->cur_sg->length - cmd->offset;
+		int flags = MSG_DONTWAIT;
+
+		if ((!last_in_batch && cmd->queue->send_list_len) ||
+		    cmd->wbytes_done + left < cmd->req.transfer_len ||
+		    queue->data_digest || !queue->nvme_sq.sqhd_disabled)
+			flags |= MSG_MORE;
 
 		ret = kernel_sendpage(cmd->queue->sock, page, cmd->offset,
-					left, MSG_DONTWAIT | MSG_MORE);
+					left, flags);
 		if (ret <= 0)
 			return ret;
 
@@ -666,7 +672,7 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
 	}
 
 	if (cmd->state == NVMET_TCP_SEND_DATA) {
-		ret = nvmet_try_send_data(cmd);
+		ret = nvmet_try_send_data(cmd, last_in_batch);
 		if (ret <= 0)
 			goto done_send;
 	}
-- 
2.20.1
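
To illustrate the rule the patch encodes, here is a minimal userspace
sketch (an illustration only, not kernel code; names such as pdu_ctx,
more_cmds_queued and send_chunk are hypothetical). MSG_MORE asks TCP
to cork a partial segment because more data is coming, so it should
only be set when another send is guaranteed to follow:

#include <stdbool.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

struct pdu_ctx {		/* hypothetical stand-in for nvmet_tcp_cmd */
	size_t bytes_done;	/* payload bytes already sent */
	size_t transfer_len;	/* total payload length of this PDU */
	bool   data_digest;	/* a 4-byte DDGST trails the payload */
	bool   success_flag;	/* C2HData SUCCESS set: no completion PDU follows */
};

/* Decide the send flags for one data chunk (progress accounting omitted). */
static ssize_t send_chunk(int sock, const struct pdu_ctx *c, const void *buf,
			  size_t len, bool more_cmds_queued)
{
	int flags = MSG_DONTWAIT;

	/*
	 * Same rule as the patch: cork only if another send will follow --
	 * more payload in this PDU, a digest trailer, a completion PDU,
	 * or another command already queued in this batch.
	 */
	if (more_cmds_queued ||
	    c->bytes_done + len < c->transfer_len ||
	    c->data_digest || !c->success_flag)
		flags |= MSG_MORE;

	return send(sock, buf, len, flags);
}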



* [PATCH 2/2] nvmet-tcp: optimize tcp stack TX when data digest is used
  2020-03-12  6:26 [PATCH 1/2] nvmet-tcp: set MSG_MORE only if we actually have more to send Sagi Grimberg
@ 2020-03-12  6:26 ` Sagi Grimberg
  2020-03-12 22:49   ` Wunderlich, Mark
From: Sagi Grimberg @ 2020-03-12  6:26 UTC
  To: linux-nvme; +Cc: Keith Busch, Mark Wunderlich, Christoph Hellwig

If we have a 4-byte data digest to send on the wire, and more
sends from this batch will follow, set MSG_MORE to tell the
stack that more is coming.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/target/tcp.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index e5d000f12059..372739221c50 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -628,7 +628,7 @@ static int nvmet_try_send_r2t(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
 	return 1;
 }
 
-static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd)
+static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd, bool last_in_batch)
 {
 	struct nvmet_tcp_queue *queue = cmd->queue;
 	struct msghdr msg = { .msg_flags = MSG_DONTWAIT };
@@ -638,6 +638,9 @@ static int nvmet_try_send_ddgst(struct nvmet_tcp_cmd *cmd)
 	};
 	int ret;
 
+	if (!last_in_batch && cmd->queue->send_list_len)
+		flags |= MSG_MORE;
+
 	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);
 	if (unlikely(ret <= 0))
 		return ret;
@@ -678,7 +681,7 @@ static int nvmet_tcp_try_send_one(struct nvmet_tcp_queue *queue,
 	}
 
 	if (cmd->state == NVMET_TCP_SEND_DDGST) {
-		ret = nvmet_try_send_ddgst(cmd);
+		ret = nvmet_try_send_ddgst(cmd, last_in_batch);
 		if (ret <= 0)
 			goto done_send;
 	}
-- 
2.20.1
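
As background (not part of the patch): the DDGST this function
transmits is the NVMe/TCP data digest, a CRC-32C over the PDU data.
A standalone sketch of that checksum follows; the kernel naturally
uses its crc32c library/crypto helpers rather than a bitwise loop
like this:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), for illustration only. */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;		/* standard initial value */

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)	/* reflected poly 0x82F63B78 */
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
	}
	return ~crc;				/* final inversion */
}

int main(void)
{
	const char *data = "123456789";

	/* Prints 0xe3069283, the well-known CRC-32C check value. */
	printf("ddgst = 0x%08x\n", crc32c(data, strlen(data)));
	return 0;
}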



* RE: [PATCH 2/2] nvmet-tcp: optimize tcp stack TX when data digest is used
  2020-03-12  6:26 ` [PATCH 2/2] nvmet-tcp: optimize tcp stack TX when data digest is used Sagi Grimberg
@ 2020-03-12 22:49   ` Wunderlich, Mark
  2020-03-12 23:05     ` Sagi Grimberg
From: Wunderlich, Mark @ 2020-03-12 22:49 UTC
  To: Sagi Grimberg, linux-nvme; +Cc: Keith Busch, Christoph Hellwig


>+	if (!last_in_batch && cmd->queue->send_list_len)
>+		flags |= MSG_MORE;

Should this actually be:  msg.msg_flags |= MSG_MORE;
>+


* Re: [PATCH 2/2] nvmet-tcp: optimize tcp stack TX when data digest is used
  2020-03-12 22:49   ` Wunderlich, Mark
@ 2020-03-12 23:05     ` Sagi Grimberg
From: Sagi Grimberg @ 2020-03-12 23:05 UTC
  To: Wunderlich, Mark, linux-nvme; +Cc: Keith Busch, Christoph Hellwig


>> +	if (!last_in_batch && cmd->queue->send_list_len)
>> +		flags |= MSG_MORE;
> 
> Should this actually be:  msg.msg_flags |= MSG_MORE;

You're right, this was left uncommitted in my branch...

Will send a v2, thanks
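
For reference, the corrected hunk would presumably read as follows
(a sketch, not the actual v2): kernel_sendmsg() takes its flags from
the msghdr, and this function has no local 'flags' variable, so the
bit has to go on msg.msg_flags:

	/* sketch of the corrected check, not the actual v2 patch */
	if (!last_in_batch && cmd->queue->send_list_len)
		msg.msg_flags |= MSG_MORE;

	ret = kernel_sendmsg(queue->sock, &msg, &iov, 1, iov.iov_len);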

