* [PATCH v5 0/3] io_uring: add napi busy polling support
@ 2022-11-21 19:14 Stefan Roesch
  2022-11-21 19:14 ` [PATCH v5 1/3] " Stefan Roesch
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Stefan Roesch @ 2022-11-21 19:14 UTC (permalink / raw)
  To: kernel-team; +Cc: shr, axboe, olivier, netdev, io-uring, kuba

This adds napi busy polling support to io_uring. It adds a new
napi_list to the io_ring_ctx structure. This list contains the
napi_id's that are currently enabled for busy polling and is used to
determine which napi_id's get busy polled.

io_uring allows specifying two parameters:
- the busy poll timeout and
- whether to prefer busy polling in the call to io_napi_busy_loop()
Both parameters are set per ring and are passed in a new structure,
io_uring_napi.
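
For illustration, enabling busy polling on a ring from userspace could
look roughly like the sketch below. It uses the raw register syscall and
assumes the uapi additions from this series are in the installed headers;
the liburing wrappers may differ, and the prefer_busy_poll field is only
added in patch 3/3. The kernel ignores nr_args for this opcode in this
series.

	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/io_uring.h>	/* headers from this series */

	static int enable_napi(int ring_fd)
	{
		struct io_uring_napi napi = {
			.busy_poll_to     = 100,	/* timeout in usec */
			.prefer_busy_poll = 1,		/* added in patch 3/3 */
		};

		/* pad and resv must be zero; old settings are copied back */
		return syscall(__NR_io_uring_register, ring_fd,
			       IORING_REGISTER_NAPI, &napi, 0);
	}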

There is also a corresponding liburing patch series, which enables this
feature. The name of the series is "liburing: add add api for napi busy
poll timeout". It also contains two programs to test this feature.

Testing has shown that enabling napi busy polling with a busy poll
timeout of 100us reduces the round-trip time from 55us to 38us. More
detailed results are part of the commit message of the first patch.


Changes:
- V5:
  - Refreshed to 6.1-rc6
  - Use copy_from_user instead of memdup/kfree
  - Removed the moving of napi_busy_poll_to
  - Return -EINVAL if any of the reserved or padded fields are not 0.
- V4:
  - Pass structure for napi config, instead of individual parameters
- V3:
  - Refreshed to 6.1-rc5
  - Added a new io-uring api for the napi prefer busy poll setting and
    wired it to io_napi_busy_loop().
  - Removed the unregister (implemented as register)
  - Added more performance results to the first commit message.
- V2:
  - Added missing defines if CONFIG_NET_RX_BUSY_POLL is not defined
  - Changed the signature of io_napi_add_list to static inline
    if CONFIG_NET_RX_BUSY_POLL is not defined
  - Defined some functions as static


Signed-off-by: Stefan Roesch <shr@devkernel.io>
Acked-by: Jakub Kicinski <kuba@kernel.org>


Stefan Roesch (3):
  io_uring: add napi busy polling support
  io_uring: add api to set / get napi configuration.
  io_uring: add api to set napi prefer busy poll

 include/linux/io_uring_types.h |   8 +
 include/uapi/linux/io_uring.h  |  12 ++
 io_uring/io_uring.c            | 302 +++++++++++++++++++++++++++++++++
 io_uring/napi.h                |  22 +++
 io_uring/poll.c                |   3 +
 io_uring/sqpoll.c              |  10 ++
 6 files changed, 357 insertions(+)
 create mode 100644 io_uring/napi.h


base-commit: eb7081409f94a9a8608593d0fb63a1aa3d6f95d8
-- 
2.30.2



* [PATCH v5 1/3] io_uring: add napi busy polling support
  2022-11-21 19:14 [PATCH v5 0/3] io_uring: add napi busy polling support Stefan Roesch
@ 2022-11-21 19:14 ` Stefan Roesch
  2022-11-21 19:45   ` Jens Axboe
  2022-11-21 19:14 ` [PATCH v5 2/3] io_uring: add api to set / get napi configuration Stefan Roesch
  2022-11-21 19:14 ` [PATCH v5 3/3] io_uring: add api to set napi prefer busy poll Stefan Roesch
  2 siblings, 1 reply; 11+ messages in thread
From: Stefan Roesch @ 2022-11-21 19:14 UTC (permalink / raw)
  To: kernel-team; +Cc: shr, axboe, olivier, netdev, io-uring, kuba

This adds napi busy polling support to io_uring. It adds a new
napi_list to the io_ring_ctx structure. This list contains the
napi_id's that are currently enabled for busy polling. The list is
synchronized by the new napi_lock spin lock. The current default napi
busy polling time is stored in napi_busy_poll_to. If napi busy polling
is not enabled, the value is 0.

Each entry on the napi list is stamped with a NAPI_TIMEOUT expiry, to
make sure that the time a napi entry stays on the napi list is limited.

The busy poll timeout is also stored as part of the io_wait_queue. This
is necessary because for sq polling the poll interval needs to be
adjusted, and the napi callback only allows passing in a single value.

This has been tested with two simple programs from the liburing library
repository: the napi client and the napi server program. The client
sends a request, which has a timestamp in its payload, and the server
replies with the same payload. The client calculates the roundtrip time
and stores it to calculate the statistics.
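
In rough pseudo-code, the client side of one roundtrip looks like this
(an illustrative sketch only; send_request()/wait_for_reply() stand in
for the io_uring send and receive of the actual test programs):

	struct timespec ts;
	uint64_t start_ns, rtt_ns;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	start_ns = ts.tv_sec * 1000000000ull + ts.tv_nsec;
	memcpy(buf, &start_ns, sizeof(start_ns)); /* timestamp in payload */
	send_request(sockfd, buf, len);           /* hypothetical helper */
	wait_for_reply(sockfd, buf, len);         /* server echoes payload */
	clock_gettime(CLOCK_MONOTONIC, &ts);
	rtt_ns = ts.tv_sec * 1000000000ull + ts.tv_nsec - start_ns;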

The client runs on host1 and the server runs on host2 (in the same
rack). The measured times below are roundtrip times, averaged over 5
runs each. Each run measures 1 million roundtrips.

                   no rx coal          rx coal: frames=88,usecs=33
Default              57us                    56us

client_poll=100us    47us                    46us

server_poll=100us    51us                    46us

client_poll=100us+   40us                    40us
server_poll=100us

client_poll=100us+   41us                    39us
server_poll=100us+
prefer napi busy poll on client

client_poll=100us+   41us                    39us
server_poll=100us+
prefer napi busy poll on server

client_poll=100us+   41us                    39us
server_poll=100us+
prefer napi busy poll on client + server

Signed-off-by: Stefan Roesch <shr@devkernel.io>
Suggested-by: Olivier Langlois <olivier@trillion01.com>
---
 include/linux/io_uring_types.h |   8 ++
 io_uring/io_uring.c            | 245 +++++++++++++++++++++++++++++++++
 io_uring/napi.h                |  22 +++
 io_uring/poll.c                |   3 +
 io_uring/sqpoll.c              |  10 ++
 5 files changed, 288 insertions(+)
 create mode 100644 io_uring/napi.h

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index f5b687a787a3..23993b5d3186 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -270,6 +270,14 @@ struct io_ring_ctx {
 	struct xarray		personalities;
 	u32			pers_next;
 
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	struct list_head	napi_list;	/* track busy poll napi_id */
+	spinlock_t		napi_lock;	/* napi_list lock */
+
+	unsigned int		napi_busy_poll_to; /* napi busy poll default timeout */
+	bool			napi_prefer_busy_poll;
+#endif
+
 	struct {
 		/*
 		 * We cache a range of free CQEs we can use, once exhausted it
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 8840cf3e20f2..6d7c05fa3c80 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -90,6 +90,7 @@
 #include "rsrc.h"
 #include "cancel.h"
 #include "net.h"
+#include "napi.h"
 #include "notif.h"
 
 #include "timeout.h"
@@ -332,6 +333,14 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	INIT_WQ_LIST(&ctx->locked_free_list);
 	INIT_DELAYED_WORK(&ctx->fallback_work, io_fallback_req_func);
 	INIT_WQ_LIST(&ctx->submit_state.compl_reqs);
+
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	INIT_LIST_HEAD(&ctx->napi_list);
+	spin_lock_init(&ctx->napi_lock);
+	ctx->napi_prefer_busy_poll = false;
+	ctx->napi_busy_poll_to = READ_ONCE(sysctl_net_busy_poll);
+#endif
+
 	return ctx;
 err:
 	kfree(ctx->dummy_ubuf);
@@ -2308,6 +2317,11 @@ struct io_wait_queue {
 	struct io_ring_ctx *ctx;
 	unsigned cq_tail;
 	unsigned nr_timeouts;
+
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	unsigned int napi_busy_poll_to;
+	bool napi_prefer_busy_poll;
+#endif
 };
 
 static inline bool io_has_work(struct io_ring_ctx *ctx)
@@ -2381,6 +2395,200 @@ static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
 	return 1;
 }
 
+#ifdef CONFIG_NET_RX_BUSY_POLL
+#define NAPI_TIMEOUT		(60 * SEC_CONVERSION)
+
+struct io_napi_entry {
+	struct list_head	list;
+	unsigned int		napi_id;
+	unsigned long		timeout;
+};
+
+static bool io_napi_busy_loop_on(struct io_ring_ctx *ctx)
+{
+	return READ_ONCE(ctx->napi_busy_poll_to);
+}
+
+/*
+ * io_napi_add() - Add napi id to the busy poll list
+ * @file: file pointer for socket
+ * @ctx:  io-uring context
+ *
+ * Add the napi id of the socket to the napi busy poll list.
+ */
+void io_napi_add(struct file *file, struct io_ring_ctx *ctx)
+{
+	unsigned int napi_id;
+	struct socket *sock;
+	struct sock *sk;
+	struct io_napi_entry *ne;
+
+	if (!io_napi_busy_loop_on(ctx))
+		return;
+
+	sock = sock_from_file(file);
+	if (!sock)
+		return;
+
+	sk = sock->sk;
+	if (!sk)
+		return;
+
+	napi_id = READ_ONCE(sk->sk_napi_id);
+
+	/* Non-NAPI IDs can be rejected */
+	if (napi_id < MIN_NAPI_ID)
+		return;
+
+	spin_lock(&ctx->napi_lock);
+	list_for_each_entry(ne, &ctx->napi_list, list) {
+		if (ne->napi_id == napi_id) {
+			ne->timeout = jiffies + NAPI_TIMEOUT;
+			goto out;
+		}
+	}
+
+	ne = kmalloc(sizeof(*ne), GFP_NOWAIT);
+	if (!ne)
+		goto out;
+
+	ne->napi_id = napi_id;
+	ne->timeout = jiffies + NAPI_TIMEOUT;
+	list_add_tail(&ne->list, &ctx->napi_list);
+
+out:
+	spin_unlock(&ctx->napi_lock);
+}
+
+static void io_napi_free_list(struct io_ring_ctx *ctx)
+{
+	spin_lock(&ctx->napi_lock);
+	while (!list_empty(&ctx->napi_list)) {
+		struct io_napi_entry *ne =
+			list_first_entry(&ctx->napi_list,
+					struct io_napi_entry, list);
+
+		list_del(&ne->list);
+		kfree(ne);
+	}
+	spin_unlock(&ctx->napi_lock);
+}
+
+static void io_napi_adjust_busy_loop_timeout(unsigned int poll_to,
+					     struct timespec64 *ts,
+					     unsigned int *new_poll_to)
+{
+	struct timespec64 pollto = ns_to_timespec64(1000 * (s64)poll_to);
+
+	if (timespec64_compare(ts, &pollto) > 0) {
+		*ts = timespec64_sub(*ts, pollto);
+		*new_poll_to = poll_to;
+	} else {
+		u64 to = timespec64_to_ns(ts);
+
+		do_div(to, 1000);
+		*new_poll_to = to;
+		ts->tv_sec = 0;
+		ts->tv_nsec = 0;
+	}
+}
+
+static inline bool io_napi_busy_loop_timeout(unsigned long start_time,
+					     unsigned long bp_usec)
+{
+	if (bp_usec) {
+		unsigned long end_time = start_time + bp_usec;
+		unsigned long now = busy_loop_current_time();
+
+		return time_after(now, end_time);
+	}
+	return true;
+}
+
+static inline void io_napi_check_entry_timeout(struct io_napi_entry *ne)
+{
+	if (time_after(jiffies, ne->timeout)) {
+		list_del(&ne->list);
+		kfree(ne);
+	}
+}
+
+/*
+ * io_napi_busy_loop() - napi busy poll loop
+ * @napi_list            : list of napi_id's supporting busy polling
+ * @napi_prefer_busy_poll: prefer napi busy polling
+ *
+ * This invokes the napi busy poll loop if sockets have been added to the
+ * napi busy poll list.
+ *
+ * Returns true if there are still napi-id's on the list.
+ */
+bool io_napi_busy_loop(struct list_head *napi_list, bool prefer_busy_poll)
+{
+	struct io_napi_entry *ne;
+	struct io_napi_entry *n;
+
+	list_for_each_entry_safe(ne, n, napi_list, list) {
+		napi_busy_loop(ne->napi_id, NULL, NULL, prefer_busy_poll,
+			       BUSY_POLL_BUDGET);
+		io_napi_check_entry_timeout(ne);
+	}
+
+	return !list_empty(napi_list);
+}
+
+static bool io_napi_busy_loop_end(void *p, unsigned long start_time)
+{
+	struct io_wait_queue *iowq = p;
+
+	return signal_pending(current) ||
+	       io_should_wake(iowq) ||
+	       io_napi_busy_loop_timeout(start_time, iowq->napi_busy_poll_to);
+}
+
+static void io_napi_blocking_busy_loop(struct list_head *napi_list,
+				       struct io_wait_queue *iowq)
+{
+	unsigned long start_time = list_is_singular(napi_list)
+					? 0
+					: busy_loop_current_time();
+
+	do {
+		if (list_is_singular(napi_list)) {
+			struct io_napi_entry *ne =
+				list_first_entry(napi_list,
+						 struct io_napi_entry, list);
+
+			napi_busy_loop(ne->napi_id, io_napi_busy_loop_end, iowq,
+				       iowq->napi_prefer_busy_poll, BUSY_POLL_BUDGET);
+			io_napi_check_entry_timeout(ne);
+			break;
+		}
+	} while (io_napi_busy_loop(napi_list, iowq->napi_prefer_busy_poll) &&
+		 !io_napi_busy_loop_end(iowq, start_time));
+}
+
+static void io_napi_putback_list(struct io_ring_ctx *ctx,
+				 struct list_head *napi_list)
+{
+	struct io_napi_entry *cne;
+	struct io_napi_entry *lne;
+
+	spin_lock(&ctx->napi_lock);
+	list_for_each_entry(cne, &ctx->napi_list, list) {
+		list_for_each_entry(lne, napi_list, list) {
+			if (cne->napi_id == lne->napi_id) {
+				list_del(&lne->list);
+				kfree(lne);
+				break;
+			}
+		}
+	}
+	list_splice(napi_list, &ctx->napi_list);
+	spin_unlock(&ctx->napi_lock);
+}
+#endif
+
 /*
  * Wait until events become available, if we don't already have some. The
  * application must reap them itself, as they reside on the shared cq ring.
@@ -2393,6 +2601,9 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 	struct io_rings *rings = ctx->rings;
 	ktime_t timeout = KTIME_MAX;
 	int ret;
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	LIST_HEAD(local_napi_list);
+#endif
 
 	if (!io_allowed_run_tw(ctx))
 		return -EEXIST;
@@ -2422,12 +2633,34 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 			return ret;
 	}
 
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	iowq.napi_busy_poll_to = 0;
+	iowq.napi_prefer_busy_poll = READ_ONCE(ctx->napi_prefer_busy_poll);
+
+	if (!(ctx->flags & IORING_SETUP_SQPOLL)) {
+		spin_lock(&ctx->napi_lock);
+		list_splice_init(&ctx->napi_list, &local_napi_list);
+		spin_unlock(&ctx->napi_lock);
+	}
+#endif
+
 	if (uts) {
 		struct timespec64 ts;
 
 		if (get_timespec64(&ts, uts))
 			return -EFAULT;
+
+#ifdef CONFIG_NET_RX_BUSY_POLL
+		if (!list_empty(&local_napi_list)) {
+			io_napi_adjust_busy_loop_timeout(READ_ONCE(ctx->napi_busy_poll_to),
+						&ts, &iowq.napi_busy_poll_to);
+		}
+#endif
 		timeout = ktime_add_ns(timespec64_to_ktime(ts), ktime_get_ns());
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	} else if (!list_empty(&local_napi_list)) {
+		iowq.napi_busy_poll_to = READ_ONCE(ctx->napi_busy_poll_to);
+#endif
 	}
 
 	init_waitqueue_func_entry(&iowq.wq, io_wake_function);
@@ -2438,6 +2671,15 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
 
 	trace_io_uring_cqring_wait(ctx, min_events);
+
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	if (iowq.napi_busy_poll_to)
+		io_napi_blocking_busy_loop(&local_napi_list, &iowq);
+
+	if (!list_empty(&local_napi_list))
+		io_napi_putback_list(ctx, &local_napi_list);
+#endif
+
 	do {
 		/* if we can't even flush overflow, don't wait for more */
 		if (!io_cqring_overflow_flush(ctx)) {
@@ -2637,6 +2879,9 @@ static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx)
 	io_req_caches_free(ctx);
 	if (ctx->hash_map)
 		io_wq_put_hash(ctx->hash_map);
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	io_napi_free_list(ctx);
+#endif
 	kfree(ctx->cancel_table.hbs);
 	kfree(ctx->cancel_table_locked.hbs);
 	kfree(ctx->dummy_ubuf);
diff --git a/io_uring/napi.h b/io_uring/napi.h
new file mode 100644
index 000000000000..11602d679629
--- /dev/null
+++ b/io_uring/napi.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef IOU_NAPI_H
+#define IOU_NAPI_H
+
+#include <linux/kernel.h>
+#include <linux/io_uring.h>
+#include <net/busy_poll.h>
+
+#ifdef CONFIG_NET_RX_BUSY_POLL
+
+void io_napi_add(struct file *file, struct io_ring_ctx *ctx);
+bool io_napi_busy_loop(struct list_head *napi_list, bool prefer_busy_poll);
+
+#else
+
+static inline void io_napi_add(struct file *file, struct io_ring_ctx *ctx)
+{
+}
+
+#endif
+#endif
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 055632e9092a..20d16b71f7db 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -15,6 +15,7 @@
 
 #include "io_uring.h"
 #include "refs.h"
+#include "napi.h"
 #include "opdef.h"
 #include "kbuf.h"
 #include "poll.h"
@@ -259,6 +260,7 @@ static int io_poll_check_events(struct io_kiocb *req, bool *locked)
 				io_req_set_res(req, mask, 0);
 				return IOU_POLL_REMOVE_POLL_USE_RES;
 			}
+			io_napi_add(req->file, ctx);
 		} else {
 			ret = io_poll_issue(req, locked);
 			if (ret == IOU_STOP_MULTISHOT)
@@ -583,6 +585,7 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 		__io_poll_execute(req, mask);
 		return 0;
 	}
+	io_napi_add(req->file, req->ctx);
 
 	if (ipt->owning) {
 		/*
diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
index 559652380672..6f50bc156038 100644
--- a/io_uring/sqpoll.c
+++ b/io_uring/sqpoll.c
@@ -15,6 +15,7 @@
 #include <uapi/linux/io_uring.h>
 
 #include "io_uring.h"
+#include "napi.h"
 #include "sqpoll.h"
 
 #define IORING_SQPOLL_CAP_ENTRIES_VALUE 8
@@ -193,6 +194,15 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, bool cap_entries)
 			ret = io_submit_sqes(ctx, to_submit);
 		mutex_unlock(&ctx->uring_lock);
 
+#ifdef CONFIG_NET_RX_BUSY_POLL
+		spin_lock(&ctx->napi_lock);
+		if (!list_empty(&ctx->napi_list) &&
+		    READ_ONCE(ctx->napi_busy_poll_to) > 0 &&
+		    io_napi_busy_loop(&ctx->napi_list, ctx->napi_prefer_busy_poll))
+			++ret;
+		spin_unlock(&ctx->napi_lock);
+#endif
+
 		if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait))
 			wake_up(&ctx->sqo_sq_wait);
 		if (creds)
-- 
2.30.2



* [PATCH v5 2/3] io_uring: add api to set / get napi configuration.
  2022-11-21 19:14 [PATCH v5 0/3] io_uring: add napi busy polling support Stefan Roesch
  2022-11-21 19:14 ` [PATCH v5 1/3] " Stefan Roesch
@ 2022-11-21 19:14 ` Stefan Roesch
  2022-11-21 19:46   ` Jens Axboe
  2022-11-25 21:43   ` Ammar Faizi
  2022-11-21 19:14 ` [PATCH v5 3/3] io_uring: add api to set napi prefer busy poll Stefan Roesch
  2 siblings, 2 replies; 11+ messages in thread
From: Stefan Roesch @ 2022-11-21 19:14 UTC (permalink / raw)
  To: kernel-team; +Cc: shr, axboe, olivier, netdev, io-uring, kuba

This adds an api to register the busy poll timeout from liburing. To be
able to use this functionality, the corresponding liburing patch is needed.
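
For example, clearing the timeout returns the settings that were in
effect (illustrative only; the wrapper names in the liburing series may
differ, and nr_args is unused by this opcode):

	struct io_uring_napi old = {};

	/* clears busy_poll_to; previous settings are copied back to 'old' */
	syscall(__NR_io_uring_register, ring_fd, IORING_UNREGISTER_NAPI,
		&old, 0);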

Signed-off-by: Stefan Roesch <shr@devkernel.io>
---
 include/uapi/linux/io_uring.h | 11 ++++++++
 io_uring/io_uring.c           | 53 +++++++++++++++++++++++++++++++++++
 2 files changed, 64 insertions(+)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 2df3225b562f..1a713bbafaee 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -490,6 +490,10 @@ enum {
 	/* register a range of fixed file slots for automatic slot allocation */
 	IORING_REGISTER_FILE_ALLOC_RANGE	= 25,
 
+	/* set/clear busy poll settings */
+	IORING_REGISTER_NAPI			= 26,
+	IORING_UNREGISTER_NAPI			= 27,
+
 	/* this goes last */
 	IORING_REGISTER_LAST
 };
@@ -612,6 +616,13 @@ struct io_uring_buf_reg {
 	__u64	resv[3];
 };
 
+/* argument for IORING_(UN)REGISTER_NAPI */
+struct io_uring_napi {
+	__u32	busy_poll_to;
+	__u32	pad;
+	__u64	resv;
+};
+
 /*
  * io_uring_restriction->opcode values
  */
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 6d7c05fa3c80..d8790c1b1cfb 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -4122,6 +4122,47 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
 	return ret;
 }
 
+static int io_register_napi(struct io_ring_ctx *ctx, void __user *arg)
+{
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	const struct io_uring_napi curr = {
+		.busy_poll_to = ctx->napi_busy_poll_to,
+	};
+	struct io_uring_napi napi;
+
+	if (copy_from_user(&napi, arg, sizeof(napi)))
+		return -EFAULT;
+	if (napi.pad || napi.resv)
+		return -EINVAL;
+
+	WRITE_ONCE(ctx->napi_busy_poll_to, napi.busy_poll_to);
+
+	if (copy_to_user(arg, &curr, sizeof(curr)))
+		return -EFAULT;
+
+	return 0;
+#else
+	return -EINVAL;
+#endif
+}
+
+static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
+{
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	const struct io_uring_napi curr = {
+		.busy_poll_to = ctx->napi_busy_poll_to,
+	};
+
+	if (copy_to_user(arg, &curr, sizeof(curr)))
+		return -EFAULT;
+
+	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
+	return 0;
+#else
+	return -EINVAL;
+#endif
+}
+
 static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			       void __user *arg, unsigned nr_args)
 	__releases(ctx->uring_lock)
@@ -4282,6 +4323,18 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			break;
 		ret = io_register_file_alloc_range(ctx, arg);
 		break;
+	case IORING_REGISTER_NAPI:
+		ret = -EINVAL;
+		if (!arg)
+			break;
+		ret = io_register_napi(ctx, arg);
+		break;
+	case IORING_UNREGISTER_NAPI:
+		ret = -EINVAL;
+		if (!arg)
+			break;
+		ret = io_unregister_napi(ctx, arg);
+		break;
 	default:
 		ret = -EINVAL;
 		break;
-- 
2.30.2



* [PATCH v5 3/3] io_uring: add api to set napi prefer busy poll
  2022-11-21 19:14 [PATCH v5 0/3] io_uring: add napi busy polling support Stefan Roesch
  2022-11-21 19:14 ` [PATCH v5 1/3] " Stefan Roesch
  2022-11-21 19:14 ` [PATCH v5 2/3] io_uring: add api to set / get napi configuration Stefan Roesch
@ 2022-11-21 19:14 ` Stefan Roesch
  2 siblings, 0 replies; 11+ messages in thread
From: Stefan Roesch @ 2022-11-21 19:14 UTC (permalink / raw)
  To: kernel-team; +Cc: shr, axboe, olivier, netdev, io-uring, kuba

This adds an api to register and unregister the napi prefer busy poll
setting from liburing. To be able to use this functionality, the
corresponding liburing patch is needed.

Signed-off-by: Stefan Roesch <shr@devkernel.io>
---
 include/uapi/linux/io_uring.h | 3 ++-
 io_uring/io_uring.c           | 6 +++++-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/io_uring.h b/include/uapi/linux/io_uring.h
index 1a713bbafaee..514604c623ae 100644
--- a/include/uapi/linux/io_uring.h
+++ b/include/uapi/linux/io_uring.h
@@ -619,7 +619,8 @@ struct io_uring_buf_reg {
 /* argument for IORING_(UN)REGISTER_NAPI */
 struct io_uring_napi {
 	__u32	busy_poll_to;
-	__u32	pad;
+	__u8	prefer_busy_poll;
+	__u8	pad[3];
 	__u64	resv;
 };
 
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index d8790c1b1cfb..555964310931 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -4127,15 +4127,17 @@ static int io_register_napi(struct io_ring_ctx *ctx, void __user *arg)
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	const struct io_uring_napi curr = {
 		.busy_poll_to = ctx->napi_busy_poll_to,
+		.prefer_busy_poll = ctx->napi_prefer_busy_poll
 	};
 	struct io_uring_napi napi;
 
 	if (copy_from_user(&napi, arg, sizeof(napi)))
 		return -EFAULT;
-	if (napi.pad || napi.resv)
+	if (napi.pad[0] || napi.pad[1] || napi.pad[2] || napi.resv)
 		return -EINVAL;
 
 	WRITE_ONCE(ctx->napi_busy_poll_to, napi.busy_poll_to);
+	WRITE_ONCE(ctx->napi_prefer_busy_poll, !!napi.prefer_busy_poll);
 
 	if (copy_to_user(arg, &curr, sizeof(curr)))
 		return -EFAULT;
@@ -4151,12 +4153,14 @@ static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
 #ifdef CONFIG_NET_RX_BUSY_POLL
 	const struct io_uring_napi curr = {
 		.busy_poll_to = ctx->napi_busy_poll_to,
+		.prefer_busy_poll = ctx->napi_prefer_busy_poll
 	};
 
 	if (copy_to_user(arg, &curr, sizeof(curr)))
 		return -EFAULT;
 
 	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
+	WRITE_ONCE(ctx->napi_prefer_busy_poll, false);
 	return 0;
 #else
 	return -EINVAL;
-- 
2.30.2



* Re: [PATCH v5 1/3] io_uring: add napi busy polling support
  2022-11-21 19:14 ` [PATCH v5 1/3] " Stefan Roesch
@ 2022-11-21 19:45   ` Jens Axboe
  2022-11-21 23:59     ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2022-11-21 19:45 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team; +Cc: olivier, netdev, io-uring, kuba

On 11/21/22 12:14 PM, Stefan Roesch wrote:
> +/*
> + * io_napi_add() - Add napi id to the busy poll list
> + * @file: file pointer for socket
> + * @ctx:  io-uring context
> + *
> + * Add the napi id of the socket to the napi busy poll list.
> + */
> +void io_napi_add(struct file *file, struct io_ring_ctx *ctx)
> +{
> +	unsigned int napi_id;
> +	struct socket *sock;
> +	struct sock *sk;
> +	struct io_napi_entry *ne;
> +
> +	if (!io_napi_busy_loop_on(ctx))
> +		return;
> +
> +	sock = sock_from_file(file);
> +	if (!sock)
> +		return;
> +
> +	sk = sock->sk;
> +	if (!sk)
> +		return;
> +
> +	napi_id = READ_ONCE(sk->sk_napi_id);
> +
> +	/* Non-NAPI IDs can be rejected */
> +	if (napi_id < MIN_NAPI_ID)
> +		return;
> +
> +	spin_lock(&ctx->napi_lock);
> +	list_for_each_entry(ne, &ctx->napi_list, list) {
> +		if (ne->napi_id == napi_id) {
> +			ne->timeout = jiffies + NAPI_TIMEOUT;
> +			goto out;
> +		}
> +	}
> +
> +	ne = kmalloc(sizeof(*ne), GFP_NOWAIT);
> +	if (!ne)
> +		goto out;
> +
> +	ne->napi_id = napi_id;
> +	ne->timeout = jiffies + NAPI_TIMEOUT;
> +	list_add_tail(&ne->list, &ctx->napi_list);
> +
> +out:
> +	spin_unlock(&ctx->napi_lock);
> +}

I think this all looks good now, just one minor comment on the above. Is
the expectation here that we'll basically always add to the napi list?
If so, then I think allocating 'ne' outside the spinlock would be a lot
saner, and then just kfree() it for the unlikely case where we find a
duplicate.
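
Something like this (totally untested sketch):

	struct io_napi_entry *ne, *tmp;

	ne = kmalloc(sizeof(*ne), GFP_NOWAIT);
	if (!ne)
		return;
	ne->napi_id = napi_id;
	ne->timeout = jiffies + NAPI_TIMEOUT;

	spin_lock(&ctx->napi_lock);
	list_for_each_entry(tmp, &ctx->napi_list, list) {
		if (tmp->napi_id == napi_id) {
			/* already tracked: refresh, drop the new entry */
			tmp->timeout = jiffies + NAPI_TIMEOUT;
			kfree(ne);
			goto out;
		}
	}
	list_add_tail(&ne->list, &ctx->napi_list);
out:
	spin_unlock(&ctx->napi_lock);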

-- 
Jens Axboe


* Re: [PATCH v5 2/3] io_uring: add api to set / get napi configuration.
  2022-11-21 19:14 ` [PATCH v5 2/3] io_uring: add api to set / get napi configuration Stefan Roesch
@ 2022-11-21 19:46   ` Jens Axboe
  2022-11-22 13:13     ` Ammar Faizi
  2022-11-25 21:43   ` Ammar Faizi
  1 sibling, 1 reply; 11+ messages in thread
From: Jens Axboe @ 2022-11-21 19:46 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team; +Cc: olivier, netdev, io-uring, kuba

On 11/21/22 12:14 PM, Stefan Roesch wrote:
> +static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
> +{
> +#ifdef CONFIG_NET_RX_BUSY_POLL
> +	const struct io_uring_napi curr = {
> +		.busy_poll_to = ctx->napi_busy_poll_to,
> +	};
> +
> +	if (copy_to_user(arg, &curr, sizeof(curr)))
> +		return -EFAULT;
> +
> +	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
> +	return 0;
> +#else
> +	return -EINVAL;
> +#endif
> +}

Should probably check resv/pad here as well, maybe even the
'busy_poll_to' being zero?

-- 
Jens Axboe


* Re: [PATCH v5 1/3] io_uring: add napi busy polling support
  2022-11-21 19:45   ` Jens Axboe
@ 2022-11-21 23:59     ` Jens Axboe
  0 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2022-11-21 23:59 UTC (permalink / raw)
  To: Stefan Roesch, kernel-team; +Cc: olivier, netdev, io-uring, kuba

On 11/21/22 12:45 PM, Jens Axboe wrote:
> On 11/21/22 12:14 PM, Stefan Roesch wrote:
>> +/*
>> + * io_napi_add() - Add napi id to the busy poll list
>> + * @file: file pointer for socket
>> + * @ctx:  io-uring context
>> + *
>> + * Add the napi id of the socket to the napi busy poll list.
>> + */
>> +void io_napi_add(struct file *file, struct io_ring_ctx *ctx)
>> +{
>> +	unsigned int napi_id;
>> +	struct socket *sock;
>> +	struct sock *sk;
>> +	struct io_napi_entry *ne;
>> +
>> +	if (!io_napi_busy_loop_on(ctx))
>> +		return;
>> +
>> +	sock = sock_from_file(file);
>> +	if (!sock)
>> +		return;
>> +
>> +	sk = sock->sk;
>> +	if (!sk)
>> +		return;
>> +
>> +	napi_id = READ_ONCE(sk->sk_napi_id);
>> +
>> +	/* Non-NAPI IDs can be rejected */
>> +	if (napi_id < MIN_NAPI_ID)
>> +		return;
>> +
>> +	spin_lock(&ctx->napi_lock);
>> +	list_for_each_entry(ne, &ctx->napi_list, list) {
>> +		if (ne->napi_id == napi_id) {
>> +			ne->timeout = jiffies + NAPI_TIMEOUT;
>> +			goto out;
>> +		}
>> +	}
>> +
>> +	ne = kmalloc(sizeof(*ne), GFP_NOWAIT);
>> +	if (!ne)
>> +		goto out;
>> +
>> +	ne->napi_id = napi_id;
>> +	ne->timeout = jiffies + NAPI_TIMEOUT;
>> +	list_add_tail(&ne->list, &ctx->napi_list);
>> +
>> +out:
>> +	spin_unlock(&ctx->napi_lock);
>> +}
> 
> I think this all looks good now, just one minor comment on the above. Is
> the expectation here that we'll basically always add to the napi list?
> If so, then I think allocating 'ne' outside the spinlock would be a lot
> saner, and then just kfree() it for the unlikely case where we find a
> duplicate.

After thinking about this a bit more, I don't think this is done in the
most optimal fashion. If the list is longer than a few entries, this
check (or check-alloc-insert) is pretty expensive and it'll add
substantial overhead to the poll path for sockets if napi is enabled.

I think we should do something ala:

1) When arming poll AND napi has been enabled for the ring, then
    alloc io_napi_entry upfront and store it in ->async_data.

2) Maintain the state in the io_napi_entry. If we're on the list,
    that can be checked with just list_empty(), for example. If not
    on the list, assign timeout and add.

3) Have regular request cleanup free it.

This could be combined with an alloc cache, I would not do that for the
first iteration though.
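
In rough, untested pseudo-C (names illustrative, locking and cleanup
glossed over):

	/* 1) at poll arm time, if napi is enabled on the ring */
	struct io_napi_entry *ne;

	ne = kmalloc(sizeof(*ne), GFP_KERNEL);
	if (!ne)
		return -ENOMEM;
	INIT_LIST_HEAD(&ne->list);
	req->async_data = ne;

	/* 2) io_napi_add(), on every poll wakeup */
	struct io_napi_entry *ne = req->async_data;

	ne->timeout = jiffies + NAPI_TIMEOUT;
	if (list_empty(&ne->list)) {
		ne->napi_id = napi_id;
		list_add_tail(&ne->list, &ctx->napi_list);
	}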

This would make io_napi_add() cheap - no more list iteration, and no
more allocations. And that is arguably the most important part, as that
is called everytime the poll is woken up. Particularly for multishot
that makes a big difference.

It's also designed much better imho, moving the more expensive bits to
the setup side.

-- 
Jens Axboe


* Re: [PATCH v5 2/3] io_uring: add api to set / get napi configuration.
  2022-11-21 19:46   ` Jens Axboe
@ 2022-11-22 13:13     ` Ammar Faizi
  2022-11-22 13:19       ` Jens Axboe
  0 siblings, 1 reply; 11+ messages in thread
From: Ammar Faizi @ 2022-11-22 13:13 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Stefan Roesch, Facebook Kernel Team, Olivier Langlois,
	netdev Mailing List, io-uring Mailing List, Jakub Kicinski

On 11/22/22 2:46 AM, Jens Axboe wrote:
> On 11/21/22 12:14 PM, Stefan Roesch wrote:
>> +static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
>> +{
>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>> +	const struct io_uring_napi curr = {
>> +		.busy_poll_to = ctx->napi_busy_poll_to,
>> +	};
>> +
>> +	if (copy_to_user(arg, &curr, sizeof(curr)))
>> +		return -EFAULT;
>> +
>> +	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
>> +	return 0;
>> +#else
>> +	return -EINVAL;
>> +#endif
>> +}
> 
> Should probably check resv/pad here as well, maybe even the
> 'busy_poll_to' being zero?

Jens, this function doesn't read from __user memory, it writes to
__user memory.

@curr.resv and @curr.pad are on the kernel's stack. Both are already
implicitly initialized to zero by the partial struct initializer.
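
That is:

	const struct io_uring_napi curr = {
		.busy_poll_to = ctx->napi_busy_poll_to,
		/* .pad and .resv are not named: C zero-initializes any
		 * members omitted from a designated initializer. */
	};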

-- 
Ammar Faizi



* Re: [PATCH v5 2/3] io_uring: add api to set / get napi configuration.
  2022-11-22 13:13     ` Ammar Faizi
@ 2022-11-22 13:19       ` Jens Axboe
  0 siblings, 0 replies; 11+ messages in thread
From: Jens Axboe @ 2022-11-22 13:19 UTC (permalink / raw)
  To: Ammar Faizi
  Cc: Stefan Roesch, Facebook Kernel Team, Olivier Langlois,
	netdev Mailing List, io-uring Mailing List, Jakub Kicinski

On 11/22/22 6:13 AM, Ammar Faizi wrote:
> On 11/22/22 2:46 AM, Jens Axboe wrote:
>> On 11/21/22 12:14 PM, Stefan Roesch wrote:
>>> +static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
>>> +{
>>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>>> +    const struct io_uring_napi curr = {
>>> +        .busy_poll_to = ctx->napi_busy_poll_to,
>>> +    };
>>> +
>>> +    if (copy_to_user(arg, &curr, sizeof(curr)))
>>> +        return -EFAULT;
>>> +
>>> +    WRITE_ONCE(ctx->napi_busy_poll_to, 0);
>>> +    return 0;
>>> +#else
>>> +    return -EINVAL;
>>> +#endif
>>> +}
>>
>> Should probably check resv/pad here as well, maybe even the
>> 'busy_poll_to' being zero?
> 
> Jens, this function doesn't read from __user memory, it writes to
> __user memory.
> 
> @curr.resv and @curr.pad are on the kernel's stack. Both are already
> implicitly initialized to zero by the partial struct initializer.

Oh yes, guess I totally missed that we don't care about the value
at all (just zero the target) and copy back the old values.

-- 
Jens Axboe




* Re: [PATCH v5 2/3] io_uring: add api to set / get napi configuration.
  2022-11-21 19:14 ` [PATCH v5 2/3] io_uring: add api to set / get napi configuration Stefan Roesch
  2022-11-21 19:46   ` Jens Axboe
@ 2022-11-25 21:43   ` Ammar Faizi
  2022-11-28 20:22     ` Stefan Roesch
  1 sibling, 1 reply; 11+ messages in thread
From: Ammar Faizi @ 2022-11-25 21:43 UTC (permalink / raw)
  To: Facebook Kernel Team, Stefan Roesch
  Cc: Jens Axboe, Olivier Langlois, Jakub Kicinski,
	netdev Mailing List, io-uring Mailing List

On 11/22/22 2:14 AM, Stefan Roesch wrote:
> +static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
> +{
> +#ifdef CONFIG_NET_RX_BUSY_POLL
> +	const struct io_uring_napi curr = {
> +		.busy_poll_to = ctx->napi_busy_poll_to,
> +	};
> +
> +	if (copy_to_user(arg, &curr, sizeof(curr)))
> +		return -EFAULT;
> +
> +	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
> +	return 0;
> +#else
> +	return -EINVAL;
> +#endif
> +}
> +
I suggest allowing users to pass NULL as the arg in case they don't
care about the old values.

Something like:

    io_uring_unregister_napi(ring, NULL);
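
On the kernel side that would amount to (sketch):

	if (arg && copy_to_user(arg, &curr, sizeof(curr)))
		return -EFAULT;

	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
	return 0;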

What do you think?

-- 
Ammar Faizi



* Re: [PATCH v5 2/3] io_uring: add api to set / get napi configuration.
  2022-11-25 21:43   ` Ammar Faizi
@ 2022-11-28 20:22     ` Stefan Roesch
  0 siblings, 0 replies; 11+ messages in thread
From: Stefan Roesch @ 2022-11-28 20:22 UTC (permalink / raw)
  To: Ammar Faizi
  Cc: Facebook Kernel Team, Jens Axboe, Olivier Langlois,
	Jakub Kicinski, netdev Mailing List, io-uring Mailing List


Ammar Faizi <ammarfaizi2@gnuweeb.org> writes:

> On 11/22/22 2:14 AM, Stefan Roesch wrote:
>> +static int io_unregister_napi(struct io_ring_ctx *ctx, void __user *arg)
>> +{
>> +#ifdef CONFIG_NET_RX_BUSY_POLL
>> +	const struct io_uring_napi curr = {
>> +		.busy_poll_to = ctx->napi_busy_poll_to,
>> +	};
>> +
>> +	if (copy_to_user(arg, &curr, sizeof(curr)))
>> +		return -EFAULT;
>> +
>> +	WRITE_ONCE(ctx->napi_busy_poll_to, 0);
>> +	return 0;
>> +#else
>> +	return -EINVAL;
>> +#endif
>> +}
>> +
> I suggest allowing users to pass a NULL as the arg in case they
> don't want to care about the old values.
>
> Something like:
>
>    io_uring_unregister_napi(ring, NULL);
>
> What do you think?

Sounds good, I can make that change in the next version.
