Subject: [PATCH 1/2] nvmet-tcp: enable polling option in io_work
From: Wunderlich, Mark
Date: 2019-10-30 22:27 UTC
To: linux-nvme

Move to a do/while loop termination condition that is time based
and applies to all modes of operation. If the socket is configured
for busy polling, the loop period is set by the socket's busy-poll
duration (sk_ll_usec). This allows recv/send activity to be
monitored for the full period, without exiting just because a
single iteration saw no activity.

If busy polling is not enabled, preserve the default behavior: the
do/while loop exits early if 'pending' is false, i.e. no activity
was seen during that iteration. The loop period in this case
defaults to 1000 usec, the same value used by io_work() on the
host side.

After the loop, if busy polling is active, treat any ops completed
during the poll period as an indication that more traffic may be
'pending'.

In either mode, re-queue the work item if previous activity
indicates there may be additional 'pending' work to process.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Mark Wunderlich <mark.wunderlich@intel.com>
---
 drivers/nvme/target/tcp.c |   26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)
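
For review convenience, here is a condensed sketch of how
nvmet_tcp_io_work() reads with this patch applied; the recv/send
budget processing and error handling are elided, so treat it as an
illustration of the control flow rather than the literal result:

static void nvmet_tcp_io_work(struct work_struct *w)
{
	struct nvmet_tcp_queue *queue =
		container_of(w, struct nvmet_tcp_queue, io_work);
	bool pending, busy_poll = false;
	int ops = 0;
	unsigned long deadline, bp_usec = 1000;	/* host-side default */

	/* Busy polling: run the loop for the socket's busy-poll budget. */
	if (sk_can_busy_loop(queue->sock->sk)) {
		busy_poll = true;
		bp_usec = queue->sock->sk->sk_ll_usec;
	}
	deadline = jiffies + usecs_to_jiffies(bp_usec);

	do {
		pending = false;

		/* ... try_recv()/try_send() set 'pending' and bump 'ops' ... */

		/* Default mode: stop as soon as an iteration is idle. */
		if (!busy_poll && !pending)
			break;
	} while (!time_after(jiffies, deadline));

	/* Busy poll: any completed ops hint that more may be pending. */
	if (busy_poll && ops > 0)
		pending = true;

	if (pending)
		queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work);
}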

diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index d535080..4f79b77 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -11,6 +11,7 @@
 #include <linux/nvme-tcp.h>
 #include <net/sock.h>
 #include <net/tcp.h>
+#include <net/busy_poll.h>
 #include <linux/inet.h>
 #include <linux/llist.h>
 #include <crypto/hash.h>
@@ -21,7 +22,6 @@
 
 #define NVMET_TCP_RECV_BUDGET		8
 #define NVMET_TCP_SEND_BUDGET		8
-#define NVMET_TCP_IO_WORK_BUDGET	64
 
 enum nvmet_tcp_send_state {
 	NVMET_TCP_SEND_DATA_PDU,
@@ -1162,8 +1162,15 @@ static void nvmet_tcp_io_work(struct work_struct *w)
 {
 	struct nvmet_tcp_queue *queue =
 		container_of(w, struct nvmet_tcp_queue, io_work);
-	bool pending;
+	bool pending, busy_poll = false;
 	int ret, ops = 0;
+	unsigned long deadline, bp_usec = 1000;
+
+	if (sk_can_busy_loop(queue->sock->sk)) {
+		busy_poll = true;
+		bp_usec = queue->sock->sk->sk_ll_usec;
+	}
+	deadline = jiffies + usecs_to_jiffies(bp_usec);
 
 	do {
 		pending = false;
@@ -1191,10 +1198,19 @@ static void nvmet_tcp_io_work(struct work_struct *w)
 			return;
 		}
 
-	} while (pending && ops < NVMET_TCP_IO_WORK_BUDGET);
+		if (!busy_poll && !pending)
+			break;
 
-	/*
-	 * We exahusted our budget, requeue our selves
+	} while (!time_after(jiffies, deadline));
+
+	/* If busy polling is active, any ops completed during the
+	 * poll period suggest that more work may be pending.
+	 */
+	if (busy_poll && ops > 0)
+		pending = true;
+
+	/* We exhausted our poll period, requeue if pending indicates
+	 * there may be more to process.
 	 */
 	if (pending)
 		queue_work_on(queue->cpu, nvmet_tcp_wq, &queue->io_work);
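
A usage note, based on my reading of the core networking code rather
than anything in this patch: a kernel socket picks up sk_ll_usec from
the net.core.busy_read sysctl in sock_init_data(), so busy polling is
enabled here by setting that sysctl to a non-zero value before the
queue sockets are created. A hypothetical helper (not part of the
patch) summarizing the resulting mode selection:

/* Loop period in usecs that io_work() would use for this socket. */
static inline unsigned long nvmet_tcp_io_work_usecs(struct sock *sk)
{
	return sk_can_busy_loop(sk) ? sk->sk_ll_usec : 1000UL;
}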

