* [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register
@ 2022-02-04 14:51 Usama Arif
  2022-02-04 14:51 ` [PATCH v6 1/5] io_uring: remove trace for eventfd Usama Arif
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Usama Arif @ 2022-02-04 14:51 UTC (permalink / raw)
  To: io-uring, axboe, asml.silence, linux-kernel; +Cc: fam.zheng, Usama Arif

Ring quiesce is currently used for registering/unregistering eventfds,
registering restrictions and enabling rings.

For opcodes relating to registering/unregistering eventfds, ring quiesce
can be avoided by creating a new RCU data structure (io_ev_fd) as part
of io_ring_ctx that holds the eventfd_ctx, with reads to the structure
protected by rcu_read_lock and writes (register/unregister calls)
protected by a mutex.
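
Condensed, the read and write sides described above look like this (a summary
sketch of patch 2/5 below, not the patch itself; the cq_flags and eventfd_async
checks are omitted here):

struct io_ev_fd {
	struct eventfd_ctx	*cq_ev_fd;
	struct rcu_head		rcu;
};

/* read side: CQE signalling path, no quiesce needed */
static void io_eventfd_signal(struct io_ring_ctx *ctx)
{
	struct io_ev_fd *ev_fd;

	rcu_read_lock();
	ev_fd = rcu_dereference(ctx->io_ev_fd);
	if (ev_fd)
		eventfd_signal(ev_fd->cq_ev_fd, 1);
	rcu_read_unlock();
}

/* write side: register/unregister, serialised by ctx->uring_lock */
static int io_eventfd_unregister(struct io_ring_ctx *ctx)
{
	struct io_ev_fd *ev_fd;

	ev_fd = rcu_dereference_protected(ctx->io_ev_fd,
					  lockdep_is_held(&ctx->uring_lock));
	if (!ev_fd)
		return -ENXIO;
	rcu_assign_pointer(ctx->io_ev_fd, NULL);
	/* io_eventfd_put() drops the eventfd_ctx ref and frees ev_fd */
	call_rcu(&ev_fd->rcu, io_eventfd_put);
	return 0;
}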

With the above approach ring quiesce, which is much more expensive than
taking an RCU read lock, can be avoided. On the system tested, io_uring_register
with IORING_REGISTER_EVENTFD takes less than 1ms with the RCU lock, compared to
15ms before with ring quiesce.

IORING_SETUP_R_DISABLED prevents submitting requests and
so there will be no requests until IORING_REGISTER_ENABLE_RINGS
is called. And IORING_REGISTER_RESTRICTIONS works only before
IORING_REGISTER_ENABLE_RINGS is called. Hence ring quiesce is
not needed for these opcodes.

---
v5->v6:
- Split removing ring quiesce completely from io_uring_register into
2 patches (Pavel Begunkov)
- Removed extra mutex while registering/unregistering eventfd as uring_lock
can be used (Pavel Begunkov)
- Move setting ctx->evfd to NULL from io_eventfd_put to before call_rcu
(Pavel Begunkov)

v4->v5:
- Remove ring quiesce completely from io_uring_register (Pavel Begunkov)
- Replaced rcu_barrier with unregistering flag (Jens Axboe)
- Created a faster check for ctx->io_ev_fd in io_eventfd_signal and cleaned up
io_eventfd_unregister (Jens Axboe)

v3->v4:
- Switch back to call_rcu and use rcu_barrier in case io_eventfd_register
fails, to make sure all rcu callbacks have finished.

v2->v3:
- Switched to using synchronize_rcu from call_rcu in io_eventfd_unregister.

v1->v2:
- Added patch to remove eventfd from tracepoint (Patch 1) (Jens Axboe)
- Folded the code of io_should_trigger_evfd into io_eventfd_signal (Jens Axboe)

Usama Arif (5):
  io_uring: remove trace for eventfd
  io_uring: avoid ring quiesce while registering/unregistering eventfd
  io_uring: avoid ring quiesce while registering async eventfd
  io_uring: avoid ring quiesce while registering restrictions and
    enabling rings
  io_uring: remove ring quiesce for io_uring_register

 fs/io_uring.c                   | 179 +++++++++++++-------------------
 include/trace/events/io_uring.h |  13 +--
 2 files changed, 75 insertions(+), 117 deletions(-)

-- 
2.25.1


* [PATCH v6 1/5] io_uring: remove trace for eventfd
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
@ 2022-02-04 14:51 ` Usama Arif
  2022-02-04 14:51 ` [PATCH v6 2/5] io_uring: avoid ring quiesce while registering/unregistering eventfd Usama Arif
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Usama Arif @ 2022-02-04 14:51 UTC (permalink / raw)
  To: io-uring, axboe, asml.silence, linux-kernel; +Cc: fam.zheng, Usama Arif

The information on whether eventfd is registered is not very useful and
would require the tracepoint to be enclosed in rcu_read_lock in a later
patch that avoids ring quiesce for registering eventfd.

Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 fs/io_uring.c                   |  3 +--
 include/trace/events/io_uring.h | 13 +++++--------
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 2e04f718319d..21531609a9c6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -11171,8 +11171,7 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, unsigned int, opcode,
 	mutex_lock(&ctx->uring_lock);
 	ret = __io_uring_register(ctx, opcode, arg, nr_args);
 	mutex_unlock(&ctx->uring_lock);
-	trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs,
-							ctx->cq_ev_fd != NULL, ret);
+	trace_io_uring_register(ctx, opcode, ctx->nr_user_files, ctx->nr_user_bufs, ret);
 out_fput:
 	fdput(f);
 	return ret;
diff --git a/include/trace/events/io_uring.h b/include/trace/events/io_uring.h
index 7346f0164cf4..098beda7601a 100644
--- a/include/trace/events/io_uring.h
+++ b/include/trace/events/io_uring.h
@@ -57,10 +57,9 @@ TRACE_EVENT(io_uring_create,
  * @opcode:		describes which operation to perform
  * @nr_user_files:	number of registered files
  * @nr_user_bufs:	number of registered buffers
- * @cq_ev_fd:		whether eventfs registered or not
  * @ret:		return code
  *
- * Allows to trace fixed files/buffers/eventfds, that could be registered to
+ * Allows to trace fixed files/buffers, that could be registered to
  * avoid an overhead of getting references to them for every operation. This
  * event, together with io_uring_file_get, can provide a full picture of how
  * much overhead one can reduce via fixing.
@@ -68,16 +67,15 @@ TRACE_EVENT(io_uring_create,
 TRACE_EVENT(io_uring_register,
 
 	TP_PROTO(void *ctx, unsigned opcode, unsigned nr_files,
-			 unsigned nr_bufs, bool eventfd, long ret),
+			 unsigned nr_bufs, long ret),
 
-	TP_ARGS(ctx, opcode, nr_files, nr_bufs, eventfd, ret),
+	TP_ARGS(ctx, opcode, nr_files, nr_bufs, ret),
 
 	TP_STRUCT__entry (
 		__field(  void *,	ctx			)
 		__field(  unsigned,	opcode		)
 		__field(  unsigned,	nr_files	)
 		__field(  unsigned,	nr_bufs		)
-		__field(  bool,		eventfd		)
 		__field(  long,		ret			)
 	),
 
@@ -86,14 +84,13 @@ TRACE_EVENT(io_uring_register,
 		__entry->opcode		= opcode;
 		__entry->nr_files	= nr_files;
 		__entry->nr_bufs	= nr_bufs;
-		__entry->eventfd	= eventfd;
 		__entry->ret		= ret;
 	),
 
 	TP_printk("ring %p, opcode %d, nr_user_files %d, nr_user_bufs %d, "
-			  "eventfd %d, ret %ld",
+			  "ret %ld",
 			  __entry->ctx, __entry->opcode, __entry->nr_files,
-			  __entry->nr_bufs, __entry->eventfd, __entry->ret)
+			  __entry->nr_bufs, __entry->ret)
 );
 
 /**
-- 
2.25.1


* [PATCH v6 2/5] io_uring: avoid ring quiesce while registering/unregistering eventfd
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
  2022-02-04 14:51 ` [PATCH v6 1/5] io_uring: remove trace for eventfd Usama Arif
@ 2022-02-04 14:51 ` Usama Arif
  2022-02-04 14:51 ` [PATCH v6 3/5] io_uring: avoid ring quiesce while registering async eventfd Usama Arif
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Usama Arif @ 2022-02-04 14:51 UTC (permalink / raw)
  To: io-uring, axboe, asml.silence, linux-kernel; +Cc: fam.zheng, Usama Arif

This is done by creating a new RCU data structure (io_ev_fd) as part of
io_ring_ctx that holds the eventfd_ctx.

The function io_eventfd_signal is executed under rcu_read_lock with a
single rcu_dereference of io_ev_fd, so that if another thread unregisters
the eventfd while io_eventfd_signal is still running, the eventfd_signal
that io_eventfd_signal was called for still completes successfully.

Registering/unregistering the eventfd is already done under uring_lock,
so multiple threads won't race while registering/unregistering the
eventfd.

With the above approach ring quiesce, which is much more expensive than
taking an RCU read lock, can be avoided. On the system tested, io_uring_register
with IORING_REGISTER_EVENTFD takes less than 1ms with the RCU lock, compared to
15ms before with ring quiesce.
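
For reference, this is the userspace call pair being sped up, via liburing's
helpers (an illustrative sketch, not part of this patch; assumes liburing
provides io_uring_register_eventfd()/io_uring_unregister_eventfd()):

#include <errno.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <liburing.h>

static int attach_cq_eventfd(struct io_uring *ring)
{
	int efd = eventfd(0, EFD_CLOEXEC);
	int ret;

	if (efd < 0)
		return -errno;
	/* IORING_REGISTER_EVENTFD: no longer pays for a ring quiesce */
	ret = io_uring_register_eventfd(ring, efd);
	if (ret) {
		close(efd);
		return ret;
	}
	/* ... read(efd, ...) to be notified of CQEs ... */

	/* IORING_UNREGISTER_EVENTFD: the old io_ev_fd is freed via call_rcu */
	ret = io_uring_unregister_eventfd(ring);
	close(efd);
	return ret;
}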

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 fs/io_uring.c | 81 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 61 insertions(+), 20 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 21531609a9c6..ad6361aeaca7 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -326,6 +326,11 @@ struct io_submit_state {
 	struct blk_plug		plug;
 };
 
+struct io_ev_fd {
+	struct eventfd_ctx	*cq_ev_fd;
+	struct rcu_head		rcu;
+};
+
 struct io_ring_ctx {
 	/* const or read-mostly hot data */
 	struct {
@@ -399,7 +404,7 @@ struct io_ring_ctx {
 	struct {
 		unsigned		cached_cq_tail;
 		unsigned		cq_entries;
-		struct eventfd_ctx	*cq_ev_fd;
+		struct io_ev_fd	__rcu	*io_ev_fd;
 		struct wait_queue_head	cq_wait;
 		unsigned		cq_extra;
 		atomic_t		cq_timeouts;
@@ -1726,13 +1731,32 @@ static inline struct io_uring_cqe *io_get_cqe(struct io_ring_ctx *ctx)
 	return &rings->cqes[tail & mask];
 }
 
-static inline bool io_should_trigger_evfd(struct io_ring_ctx *ctx)
+static void io_eventfd_signal(struct io_ring_ctx *ctx)
 {
-	if (likely(!ctx->cq_ev_fd))
-		return false;
+	struct io_ev_fd *ev_fd;
+
+	/* Return quickly if ctx->io_ev_fd doesn't exist */
+	if (likely(!rcu_dereference_raw(ctx->io_ev_fd)))
+		return;
+
+	rcu_read_lock();
+	/* rcu_dereference ctx->io_ev_fd once and use it for both for checking and eventfd_signal */
+	ev_fd = rcu_dereference(ctx->io_ev_fd);
+
+	/*
+	 * Check again if ev_fd exists in case an io_eventfd_unregister call completed between
+	 * the NULL check of ctx->io_ev_fd at the start of the function and rcu_read_lock.
+	 */
+	if (unlikely(!ev_fd))
+		goto out;
 	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
-		return false;
-	return !ctx->eventfd_async || io_wq_current_is_worker();
+		goto out;
+
+	if (!ctx->eventfd_async || io_wq_current_is_worker())
+		eventfd_signal(ev_fd->cq_ev_fd, 1);
+
+out:
+	rcu_read_unlock();
 }
 
 /*
@@ -1751,8 +1775,7 @@ static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
 	 */
 	if (wq_has_sleeper(&ctx->cq_wait))
 		wake_up_all(&ctx->cq_wait);
-	if (io_should_trigger_evfd(ctx))
-		eventfd_signal(ctx->cq_ev_fd, 1);
+	io_eventfd_signal(ctx);
 }
 
 static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
@@ -1764,8 +1787,7 @@ static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
 		if (waitqueue_active(&ctx->cq_wait))
 			wake_up_all(&ctx->cq_wait);
 	}
-	if (io_should_trigger_evfd(ctx))
-		eventfd_signal(ctx->cq_ev_fd, 1);
+	io_eventfd_signal(ctx);
 }
 
 /* Returns true if there are no backlogged entries after the flush */
@@ -9353,31 +9375,48 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
 
 static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
 {
+	struct io_ev_fd *ev_fd;
 	__s32 __user *fds = arg;
-	int fd;
+	int fd, ret;
 
-	if (ctx->cq_ev_fd)
+	ev_fd = rcu_dereference_protected(ctx->io_ev_fd, lockdep_is_held(&ctx->uring_lock));
+	if (ev_fd)
 		return -EBUSY;
 
 	if (copy_from_user(&fd, fds, sizeof(*fds)))
 		return -EFAULT;
 
-	ctx->cq_ev_fd = eventfd_ctx_fdget(fd);
-	if (IS_ERR(ctx->cq_ev_fd)) {
-		int ret = PTR_ERR(ctx->cq_ev_fd);
+	ev_fd = kmalloc(sizeof(*ev_fd), GFP_KERNEL);
+	if (!ev_fd)
+		return -ENOMEM;
 
-		ctx->cq_ev_fd = NULL;
+	ev_fd->cq_ev_fd = eventfd_ctx_fdget(fd);
+	if (IS_ERR(ev_fd->cq_ev_fd)) {
+		ret = PTR_ERR(ev_fd->cq_ev_fd);
+		kfree(ev_fd);
 		return ret;
 	}
 
-	return 0;
+	rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
+	return 0;
+}
+
+static void io_eventfd_put(struct rcu_head *rcu)
+{
+	struct io_ev_fd *ev_fd = container_of(rcu, struct io_ev_fd, rcu);
+
+	eventfd_ctx_put(ev_fd->cq_ev_fd);
+	kfree(ev_fd);
 }
 
 static int io_eventfd_unregister(struct io_ring_ctx *ctx)
 {
-	if (ctx->cq_ev_fd) {
-		eventfd_ctx_put(ctx->cq_ev_fd);
-		ctx->cq_ev_fd = NULL;
+	struct io_ev_fd *ev_fd;
+
+	ev_fd = rcu_dereference_protected(ctx->io_ev_fd, lockdep_is_held(&ctx->uring_lock));
+	if (ev_fd) {
+		rcu_assign_pointer(ctx->io_ev_fd, NULL);
+		call_rcu(&ev_fd->rcu, io_eventfd_put);
 		return 0;
 	}
 
@@ -10960,6 +10999,8 @@ static bool io_register_op_must_quiesce(int op)
 	case IORING_REGISTER_FILES:
 	case IORING_UNREGISTER_FILES:
 	case IORING_REGISTER_FILES_UPDATE:
+	case IORING_REGISTER_EVENTFD:
+	case IORING_UNREGISTER_EVENTFD:
 	case IORING_REGISTER_PROBE:
 	case IORING_REGISTER_PERSONALITY:
 	case IORING_UNREGISTER_PERSONALITY:
-- 
2.25.1


* [PATCH v6 3/5] io_uring: avoid ring quiesce while registering async eventfd
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
  2022-02-04 14:51 ` [PATCH v6 1/5] io_uring: remove trace for eventfd Usama Arif
  2022-02-04 14:51 ` [PATCH v6 2/5] io_uring: avoid ring quiesce while registering/unregistering eventfd Usama Arif
@ 2022-02-04 14:51 ` Usama Arif
  2022-02-04 14:51 ` [PATCH v6 4/5] io_uring: avoid ring quiesce while registering restrictions and enabling rings Usama Arif
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Usama Arif @ 2022-02-04 14:51 UTC (permalink / raw)
  To: io-uring, axboe, asml.silence, linux-kernel; +Cc: fam.zheng, Usama Arif

This is done using the RCU data structure (io_ev_fd). eventfd_async is
moved from io_ring_ctx to io_ev_fd, which is RCU protected, hence
avoiding ring quiesce, which is much more expensive than an RCU lock.
The place where eventfd_async is read is already under rcu_read_lock,
so no extra RCU read-side critical section is needed.
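
From userspace the only difference between the two registration flavours is
when the eventfd fires (an illustrative sketch via liburing helpers, assumed
available; not part of this patch):

#include <errno.h>
#include <sys/eventfd.h>
#include <liburing.h>

static int attach_async_only_eventfd(struct io_uring *ring)
{
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -errno;
	/*
	 * IORING_REGISTER_EVENTFD_ASYNC: with eventfd_async set, only
	 * completions posted from io-wq (async) context signal efd;
	 * inline completions from the submitting task do not.
	 */
	return io_uring_register_eventfd_async(ring, efd);
}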

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 fs/io_uring.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index ad6361aeaca7..671c57f9c1fa 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -328,6 +328,7 @@ struct io_submit_state {
 
 struct io_ev_fd {
 	struct eventfd_ctx	*cq_ev_fd;
+	unsigned int		eventfd_async: 1;
 	struct rcu_head		rcu;
 };
 
@@ -340,7 +341,6 @@ struct io_ring_ctx {
 		unsigned int		flags;
 		unsigned int		compat: 1;
 		unsigned int		drain_next: 1;
-		unsigned int		eventfd_async: 1;
 		unsigned int		restricted: 1;
 		unsigned int		off_timeout_used: 1;
 		unsigned int		drain_active: 1;
@@ -1752,7 +1752,7 @@ static void io_eventfd_signal(struct io_ring_ctx *ctx)
 	if (READ_ONCE(ctx->rings->cq_flags) & IORING_CQ_EVENTFD_DISABLED)
 		goto out;
 
-	if (!ctx->eventfd_async || io_wq_current_is_worker())
+	if (!ev_fd->eventfd_async || io_wq_current_is_worker())
 		eventfd_signal(ev_fd->cq_ev_fd, 1);
 
 out:
@@ -9373,7 +9373,8 @@ static int __io_sqe_buffers_update(struct io_ring_ctx *ctx,
 	return done ? done : err;
 }
 
-static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
+static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg,
+			       unsigned int eventfd_async)
 {
 	struct io_ev_fd *ev_fd;
 	__s32 __user *fds = arg;
@@ -9396,6 +9397,7 @@ static int io_eventfd_register(struct io_ring_ctx *ctx, void __user *arg)
 		kfree(ev_fd);
 		return ret;
 	}
+	ev_fd->eventfd_async = eventfd_async;
 
 	rcu_assign_pointer(ctx->io_ev_fd, ev_fd);
 	return 0;
@@ -11000,6 +11002,7 @@ static bool io_register_op_must_quiesce(int op)
 	case IORING_UNREGISTER_FILES:
 	case IORING_REGISTER_FILES_UPDATE:
 	case IORING_REGISTER_EVENTFD:
+	case IORING_REGISTER_EVENTFD_ASYNC:
 	case IORING_UNREGISTER_EVENTFD:
 	case IORING_REGISTER_PROBE:
 	case IORING_REGISTER_PERSONALITY:
@@ -11100,17 +11103,16 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 		ret = io_register_files_update(ctx, arg, nr_args);
 		break;
 	case IORING_REGISTER_EVENTFD:
-	case IORING_REGISTER_EVENTFD_ASYNC:
 		ret = -EINVAL;
 		if (nr_args != 1)
 			break;
-		ret = io_eventfd_register(ctx, arg);
-		if (ret)
+		ret = io_eventfd_register(ctx, arg, 0);
+		break;
+	case IORING_REGISTER_EVENTFD_ASYNC:
+		ret = -EINVAL;
+		if (nr_args != 1)
 			break;
-		if (opcode == IORING_REGISTER_EVENTFD_ASYNC)
-			ctx->eventfd_async = 1;
-		else
-			ctx->eventfd_async = 0;
+		ret = io_eventfd_register(ctx, arg, 1);
 		break;
 	case IORING_UNREGISTER_EVENTFD:
 		ret = -EINVAL;
-- 
2.25.1


* [PATCH v6 4/5] io_uring: avoid ring quiesce while registering restrictions and enabling rings
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
                   ` (2 preceding siblings ...)
  2022-02-04 14:51 ` [PATCH v6 3/5] io_uring: avoid ring quiesce while registering async eventfd Usama Arif
@ 2022-02-04 14:51 ` Usama Arif
  2022-02-04 14:51 ` [PATCH v6 5/5] io_uring: remove ring quiesce for io_uring_register Usama Arif
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Usama Arif @ 2022-02-04 14:51 UTC (permalink / raw)
  To: io-uring, axboe, asml.silence, linux-kernel; +Cc: fam.zheng, Usama Arif

IORING_SETUP_R_DISABLED prevents submitting requests and
so there will be no requests until IORING_REGISTER_ENABLE_RINGS
is called. And IORING_REGISTER_RESTRICTIONS works only before
IORING_REGISTER_ENABLE_RINGS is called. Hence ring quiesce is
not needed for these opcodes.
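
For context, the ordering these opcodes rely on looks like this from userspace
(an illustrative sketch using liburing's restriction helpers, not part of this
patch; error handling trimmed):

#include <liburing.h>

static int setup_restricted_ring(struct io_uring *ring)
{
	struct io_uring_params p = { .flags = IORING_SETUP_R_DISABLED };
	struct io_uring_restriction res[] = {
		{ .opcode = IORING_RESTRICTION_SQE_OP, .sqe_op = IORING_OP_READV },
		{ .opcode = IORING_RESTRICTION_SQE_OP, .sqe_op = IORING_OP_WRITEV },
	};
	int ret;

	/* ring starts disabled: no submissions can race with the calls below */
	ret = io_uring_queue_init_params(8, ring, &p);
	if (ret)
		return ret;
	/* IORING_REGISTER_RESTRICTIONS: only accepted while still disabled */
	ret = io_uring_register_restrictions(ring, res, 2);
	if (ret)
		return ret;
	/* IORING_REGISTER_ENABLE_RINGS: submissions become possible after this */
	return io_uring_enable_rings(ring);
}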

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 fs/io_uring.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 671c57f9c1fa..a2ce2601d4de 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -11007,6 +11007,8 @@ static bool io_register_op_must_quiesce(int op)
 	case IORING_REGISTER_PROBE:
 	case IORING_REGISTER_PERSONALITY:
 	case IORING_UNREGISTER_PERSONALITY:
+	case IORING_REGISTER_ENABLE_RINGS:
+	case IORING_REGISTER_RESTRICTIONS:
 	case IORING_REGISTER_FILES2:
 	case IORING_REGISTER_FILES_UPDATE2:
 	case IORING_REGISTER_BUFFERS2:
-- 
2.25.1


* [PATCH v6 5/5] io_uring: remove ring quiesce for io_uring_register
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
                   ` (3 preceding siblings ...)
  2022-02-04 14:51 ` [PATCH v6 4/5] io_uring: avoid ring quiesce while registering restrictions and enabling rings Usama Arif
@ 2022-02-04 14:51 ` Usama Arif
  2022-07-15 15:44   ` Michal Koutný
  2022-02-04 15:53 ` [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Jens Axboe
  2022-02-04 16:19 ` Jens Axboe
  6 siblings, 1 reply; 13+ messages in thread
From: Usama Arif @ 2022-02-04 14:51 UTC (permalink / raw)
  To: io-uring, axboe, asml.silence, linux-kernel; +Cc: fam.zheng, Usama Arif

None of the opcodes in io_uring_register use ring quiesce
anymore. Hence io_register_op_must_quiesce always returns
false and io_ctx_quiesce is never called.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 fs/io_uring.c | 83 ---------------------------------------------------
 1 file changed, 83 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index a2ce2601d4de..ad8f84376955 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1292,18 +1292,6 @@ static inline unsigned int io_put_kbuf(struct io_kiocb *req)
 	return __io_put_kbuf(req);
 }
 
-static void io_refs_resurrect(struct percpu_ref *ref, struct completion *compl)
-{
-	bool got = percpu_ref_tryget(ref);
-
-	/* already at zero, wait for ->release() */
-	if (!got)
-		wait_for_completion(compl);
-	percpu_ref_resurrect(ref);
-	if (got)
-		percpu_ref_put(ref);
-}
-
 static bool io_match_task(struct io_kiocb *head, struct task_struct *task,
 			  bool cancel_all)
 	__must_hold(&req->ctx->timeout_lock)
@@ -10993,66 +10981,6 @@ static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx,
 	return ret;
 }
 
-static bool io_register_op_must_quiesce(int op)
-{
-	switch (op) {
-	case IORING_REGISTER_BUFFERS:
-	case IORING_UNREGISTER_BUFFERS:
-	case IORING_REGISTER_FILES:
-	case IORING_UNREGISTER_FILES:
-	case IORING_REGISTER_FILES_UPDATE:
-	case IORING_REGISTER_EVENTFD:
-	case IORING_REGISTER_EVENTFD_ASYNC:
-	case IORING_UNREGISTER_EVENTFD:
-	case IORING_REGISTER_PROBE:
-	case IORING_REGISTER_PERSONALITY:
-	case IORING_UNREGISTER_PERSONALITY:
-	case IORING_REGISTER_ENABLE_RINGS:
-	case IORING_REGISTER_RESTRICTIONS:
-	case IORING_REGISTER_FILES2:
-	case IORING_REGISTER_FILES_UPDATE2:
-	case IORING_REGISTER_BUFFERS2:
-	case IORING_REGISTER_BUFFERS_UPDATE:
-	case IORING_REGISTER_IOWQ_AFF:
-	case IORING_UNREGISTER_IOWQ_AFF:
-	case IORING_REGISTER_IOWQ_MAX_WORKERS:
-		return false;
-	default:
-		return true;
-	}
-}
-
-static __cold int io_ctx_quiesce(struct io_ring_ctx *ctx)
-{
-	long ret;
-
-	percpu_ref_kill(&ctx->refs);
-
-	/*
-	 * Drop uring mutex before waiting for references to exit. If another
-	 * thread is currently inside io_uring_enter() it might need to grab the
-	 * uring_lock to make progress. If we hold it here across the drain
-	 * wait, then we can deadlock. It's safe to drop the mutex here, since
-	 * no new references will come in after we've killed the percpu ref.
-	 */
-	mutex_unlock(&ctx->uring_lock);
-	do {
-		ret = wait_for_completion_interruptible_timeout(&ctx->ref_comp, HZ);
-		if (ret) {
-			ret = min(0L, ret);
-			break;
-		}
-
-		ret = io_run_task_work_sig();
-		io_req_caches_free(ctx);
-	} while (ret >= 0);
-	mutex_lock(&ctx->uring_lock);
-
-	if (ret)
-		io_refs_resurrect(&ctx->refs, &ctx->ref_comp);
-	return ret;
-}
-
 static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			       void __user *arg, unsigned nr_args)
 	__releases(ctx->uring_lock)
@@ -11076,12 +11004,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 			return -EACCES;
 	}
 
-	if (io_register_op_must_quiesce(opcode)) {
-		ret = io_ctx_quiesce(ctx);
-		if (ret)
-			return ret;
-	}
-
 	switch (opcode) {
 	case IORING_REGISTER_BUFFERS:
 		ret = io_sqe_buffers_register(ctx, arg, nr_args, NULL);
@@ -11186,11 +11108,6 @@ static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode,
 		break;
 	}
 
-	if (io_register_op_must_quiesce(opcode)) {
-		/* bring the ctx back to life */
-		percpu_ref_reinit(&ctx->refs);
-		reinit_completion(&ctx->ref_comp);
-	}
 	return ret;
 }
 
-- 
2.25.1


* Re: [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
                   ` (4 preceding siblings ...)
  2022-02-04 14:51 ` [PATCH v6 5/5] io_uring: remove ring quiesce for io_uring_register Usama Arif
@ 2022-02-04 15:53 ` Jens Axboe
  2022-02-04 16:19 ` Jens Axboe
  6 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-02-04 15:53 UTC (permalink / raw)
  To: Usama Arif, io-uring, asml.silence, linux-kernel; +Cc: fam.zheng

On 2/4/22 7:51 AM, Usama Arif wrote:
> Ring quiesce is currently used for registering/unregistering eventfds,
> registering restrictions and enabling rings.
> 
> For opcodes relating to registering/unregistering eventfds, ring quiesce
> can be avoided by creating a new RCU data structure (io_ev_fd) as part
> of io_ring_ctx that holds the eventfd_ctx, with reads to the structure
> protected by rcu_read_lock and writes (register/unregister calls)
> protected by a mutex.
> 
> With the above approach ring quiesce, which is much more expensive than
> taking an RCU read lock, can be avoided. On the system tested, io_uring_register
> with IORING_REGISTER_EVENTFD takes less than 1ms with the RCU lock, compared to
> 15ms before with ring quiesce.
> 
> IORING_SETUP_R_DISABLED prevents submitting requests and
> so there will be no requests until IORING_REGISTER_ENABLE_RINGS
> is called. And IORING_REGISTER_RESTRICTIONS works only before
> IORING_REGISTER_ENABLE_RINGS is called. Hence ring quiesce is
> not needed for these opcodes.

I wrote a simple test case just verifying register+unregister, and also
doing a loop to catch any issues around that. Here's the current kernel:

[root@archlinux liburing]# time test/eventfd-reg 

real	0m7.980s
user	0m0.004s
sys	0m0.000s
[root@archlinux liburing]# time test/eventfd-reg 

real	0m8.197s
user	0m0.004s
sys	0m0.000s

which is around 80ms for each register/unregister cycle, and here are
the results with this patchset:

[root@archlinux liburing]# time test/eventfd-reg

real	0m0.002s
user	0m0.001s
sys	0m0.000s
[root@archlinux liburing]# time test/eventfd-reg

real	0m0.001s
user	0m0.001s
sys	0m0.000s

which looks a lot more reasonable.

I'll look over this one and see if I've got anything to complain about,
just ran it first since I wrote the test anyway. Here's the test case,
btw:

https://git.kernel.dk/cgit/liburing/commit/?id=5bde26e4587168a439cabdbe73740454249e5204
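
(The committed test is at the link above; its core is essentially a
register/unregister loop like the sketch below. The loop count here is only a
guess based on the ~80ms-per-cycle numbers, and this is not the actual test
source.)

#include <sys/eventfd.h>
#include <liburing.h>

int main(void)
{
	struct io_uring ring;
	int efd, i;

	if (io_uring_queue_init(8, &ring, 0))
		return 1;
	efd = eventfd(0, EFD_CLOEXEC);
	if (efd < 0)
		return 1;
	/* each iteration is one IORING_REGISTER_EVENTFD + IORING_UNREGISTER_EVENTFD */
	for (i = 0; i < 100; i++) {
		if (io_uring_register_eventfd(&ring, efd))
			return 1;
		if (io_uring_unregister_eventfd(&ring))
			return 1;
	}
	io_uring_queue_exit(&ring);
	return 0;
}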

-- 
Jens Axboe


* Re: [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register
  2022-02-04 14:51 [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Usama Arif
                   ` (5 preceding siblings ...)
  2022-02-04 15:53 ` [PATCH v6 0/5] io_uring: remove ring quiesce in io_uring_register Jens Axboe
@ 2022-02-04 16:19 ` Jens Axboe
  6 siblings, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-02-04 16:19 UTC (permalink / raw)
  To: io-uring, asml.silence, linux-kernel, Usama Arif; +Cc: fam.zheng

On Fri, 4 Feb 2022 14:51:12 +0000, Usama Arif wrote:
> Ring quiesce is currently used for registering/unregistering eventfds,
> registering restrictions and enabling rings.
> 
> For opcodes relating to registering/unregistering eventfds, ring quiesce
> can be avoided by creating a new RCU data structure (io_ev_fd) as part
> of io_ring_ctx that holds the eventfd_ctx, with reads to the structure
> protected by rcu_read_lock and writes (register/unregister calls)
> protected by a mutex.
> 
> [...]

Applied, thanks!

[1/5] io_uring: remove trace for eventfd
      commit: 054f8098d98be4c53ef317e9dd745bb5759f61d9
[2/5] io_uring: avoid ring quiesce while registering/unregistering eventfd
      commit: b77e315a96445e5f19a83546c73d2abbcedfa5db
[3/5] io_uring: avoid ring quiesce while registering async eventfd
      commit: 13bcfd43fd0ef5e0de306e6ffb566970499b6888
[4/5] io_uring: avoid ring quiesce while registering restrictions and enabling rings
      commit: 1769f1468f4697409ee44f494940b5381acc1bae
[5/5] io_uring: remove ring quiesce for io_uring_register
      commit: 971d72eb476604fc91a8e82f0421e6f599f9c300

Best regards,
-- 
Jens Axboe



* Re: [PATCH v6 5/5] io_uring: remove ring quiesce for io_uring_register
  2022-02-04 14:51 ` [PATCH v6 5/5] io_uring: remove ring quiesce for io_uring_register Usama Arif
@ 2022-07-15 15:44   ` Michal Koutný
  2022-07-15 16:00     ` Jens Axboe
  0 siblings, 1 reply; 13+ messages in thread
From: Michal Koutný @ 2022-07-15 15:44 UTC (permalink / raw)
  To: Usama Arif; +Cc: io-uring, axboe, asml.silence, linux-kernel, fam.zheng

Hello.

On Fri, Feb 04, 2022 at 02:51:17PM +0000, Usama Arif <usama.arif@bytedance.com> wrote:
> -	percpu_ref_resurrect(ref);
> [...]
> -		percpu_ref_reinit(&ctx->refs);

It seems to me that this patch could have also changed

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1911,7 +1911,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
        ctx->dummy_ubuf->ubuf = -1UL;

        if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
-                           PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
+                           0, GFP_KERNEL))
                goto err;

        ctx->flags = p->flags;

Or are there any plans to still use the reinit/resurrect functionality
of the percpu counter?
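
(For context: PERCPU_REF_ALLOW_REINIT is only needed for a kill/reinit cycle
like the one the removed quiesce path performed; a condensed sketch of that
now-removed pattern, not code from this thread:)

static void quiesce_cycle(struct io_ring_ctx *ctx)
{
	percpu_ref_kill(&ctx->refs);		/* switch to atomic mode, start dying */
	wait_for_completion(&ctx->ref_comp);	/* ->release() completes ref_comp */
	percpu_ref_reinit(&ctx->refs);		/* only valid with PERCPU_REF_ALLOW_REINIT */
}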

Thanks,
Michal

* Re: [PATCH v6 5/5] io_uring: remove ring quiesce for io_uring_register
  2022-07-15 15:44   ` Michal Koutný
@ 2022-07-15 16:00     ` Jens Axboe
  2022-07-15 17:45       ` [PATCH] io_uring: Don't require reinitable percpu_ref Michal Koutný
  0 siblings, 1 reply; 13+ messages in thread
From: Jens Axboe @ 2022-07-15 16:00 UTC (permalink / raw)
  To: Michal Koutný, Usama Arif
  Cc: io-uring, asml.silence, linux-kernel, fam.zheng

On 7/15/22 9:44 AM, Michal Koutný wrote:
> Hello.
> 
> On Fri, Feb 04, 2022 at 02:51:17PM +0000, Usama Arif <usama.arif@bytedance.com> wrote:
>> -	percpu_ref_resurrect(ref);
>> [...]
>> -		percpu_ref_reinit(&ctx->refs);
> 
> It seems to me that this patch could have also changed
> 
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -1911,7 +1911,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
>         ctx->dummy_ubuf->ubuf = -1UL;
> 
>         if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
> -                           PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
> +                           0, GFP_KERNEL))
>                 goto err;
> 
>         ctx->flags = p->flags;
> 
> Or are there any plans to still use the reinit/resurrect functionality
> of the percpu counter?

Ah yes indeed, good catch! Would you mind sending that as an actual
patch?

-- 
Jens Axboe


* [PATCH] io_uring: Don't require reinitable percpu_ref
  2022-07-15 16:00     ` Jens Axboe
@ 2022-07-15 17:45       ` Michal Koutný
  2022-07-15 17:54         ` Roman Gushchin
  2022-07-15 18:22         ` Jens Axboe
  0 siblings, 2 replies; 13+ messages in thread
From: Michal Koutný @ 2022-07-15 17:45 UTC (permalink / raw)
  To: axboe
  Cc: asml.silence, fam.zheng, io-uring, linux-kernel, roman.gushchin,
	usama.arif

The commit 8bb649ee1da3 ("io_uring: remove ring quiesce for
io_uring_register") removed the workflow relying on reinit/resurrection
of the percpu_ref, hence initialization with that capability requested
is a relic.

This is based on code review; it causes no real bug (and theoretically
can't). Technically it's a revert of commit 214828962dea ("io_uring:
initialize percpu refcounters using PERCU_REF_ALLOW_REINIT") but since
the flag omission is now justified, I'm not making this a revert.

Fixes: 8bb649ee1da3 ("io_uring: remove ring quiesce for io_uring_register")
Signed-off-by: Michal Koutný <mkoutny@suse.com>
---
 fs/io_uring.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index a01ea49f3017..563f2266c674 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1911,7 +1911,7 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	ctx->dummy_ubuf->ubuf = -1UL;
 
 	if (percpu_ref_init(&ctx->refs, io_ring_ctx_ref_free,
-			    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL))
+			    0, GFP_KERNEL))
 		goto err;
 
 	ctx->flags = p->flags;
-- 
2.37.0


* Re: [PATCH] io_uring: Don't require reinitable percpu_ref
  2022-07-15 17:45       ` [PATCH] io_uring: Don't require reinitable percpu_ref Michal Koutný
@ 2022-07-15 17:54         ` Roman Gushchin
  2022-07-15 18:22         ` Jens Axboe
  1 sibling, 0 replies; 13+ messages in thread
From: Roman Gushchin @ 2022-07-15 17:54 UTC (permalink / raw)
  To: Michal Koutný
  Cc: axboe, asml.silence, fam.zheng, io-uring, linux-kernel, usama.arif

On Fri, Jul 15, 2022 at 07:45:01PM +0200, Michal Koutny wrote:
> The commit 8bb649ee1da3 ("io_uring: remove ring quiesce for
> io_uring_register") removed the workflow relying on reinit/resurrection
> of the percpu_ref, hence initialization with that capability requested
> is a relic.
> 
> This is based on code review; it causes no real bug (and theoretically
> can't). Technically it's a revert of commit 214828962dea ("io_uring:
> initialize percpu refcounters using PERCU_REF_ALLOW_REINIT") but since
> the flag omission is now justified, I'm not making this a revert.
> 
> Fixes: 8bb649ee1da3 ("io_uring: remove ring quiesce for io_uring_register")
> Signed-off-by: Michal Koutný <mkoutny@suse.com>

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!

* Re: [PATCH] io_uring: Don't require reinitable percpu_ref
  2022-07-15 17:45       ` [PATCH] io_uring: Don't require reinitable percpu_ref Michal Koutný
  2022-07-15 17:54         ` Roman Gushchin
@ 2022-07-15 18:22         ` Jens Axboe
  1 sibling, 0 replies; 13+ messages in thread
From: Jens Axboe @ 2022-07-15 18:22 UTC (permalink / raw)
  To: Michal Koutný
  Cc: asml.silence, fam.zheng, io-uring, linux-kernel, roman.gushchin,
	usama.arif

On 7/15/22 11:45 AM, Michal Koutný wrote:
> The commit 8bb649ee1da3 ("io_uring: remove ring quiesce for
> io_uring_register") removed the workflow relying on reinit/resurrection
> of the percpu_ref, hence initialization with that capability requested
> is a relic.
> 
> This is based on code review; it causes no real bug (and theoretically
> can't). Technically it's a revert of commit 214828962dea ("io_uring:
> initialize percpu refcounters using PERCU_REF_ALLOW_REINIT") but since
> the flag omission is now justified, I'm not making this a revert.

Thanks, applied manually for 5.20 (new file location).

-- 
Jens Axboe

