io-uring.vger.kernel.org archive mirror
* [PATCH for-5.10 0/7] another pack of random cleanups
@ 2020-10-18  9:17 Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 1/7] io_uring: flags-based creds init in queue Pavel Begunkov
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Nothing particularly interesting, just flushing patches that are
simple and straightforward.

Pavel Begunkov (7):
  io_uring: flags-based creds init in queue
  io_uring: kill ref get/drop in personality init
  io_uring: inline io_fail_links()
  io_uring: make cached_cq_overflow non atomic_t
  io_uring: remove extra ->file check in poll prep
  io_uring: inline io_poll_task_handler()
  io_uring: do poll's hash_node init in common code

 fs/io_uring.c | 77 ++++++++++++++++++++++-----------------------------
 1 file changed, 33 insertions(+), 44 deletions(-)

-- 
2.24.0


* [PATCH 1/7] io_uring: flags-based creds init in queue
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 2/7] io_uring: kill ref get/drop in personality init Pavel Begunkov
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Use IO_WQ_WORK_CREDS to figure out whether a request has creds to be
used. Since a recent change, this should rely only on the flag, not on
the value of work.identity->creds.
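
For illustration, a rough user-space sketch of the invariant this check
relies on (hypothetical names, not the kernel code): creds are only ever
attached together with the flag, so testing the flag is sufficient and
the pointer itself never needs to be inspected.

#include <stdbool.h>

/* Stand-ins for the kernel types; names are illustrative only. */
struct creds;

struct work_item {
	unsigned flags;			/* WORK_CREDS set iff creds is valid */
	const struct creds *creds;	/* only meaningful when flag is set  */
};

enum { WORK_CREDS = 1u << 0 };

/* Producer: creds are always attached together with the flag, so the
 * flag alone tells consumers whether the pointer may be dereferenced. */
static void attach_creds(struct work_item *w, const struct creds *creds)
{
	w->creds = creds;
	w->flags |= WORK_CREDS;
}

/* Consumer: test the flag, never the raw pointer value. */
static bool work_has_creds(const struct work_item *w)
{
	return w->flags & WORK_CREDS;
}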

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 5651b9d701e0..fd2fc72c312c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -6184,7 +6184,8 @@ static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs)
 again:
 	linked_timeout = io_prep_linked_timeout(req);
 
-	if ((req->flags & REQ_F_WORK_INITIALIZED) && req->work.identity->creds &&
+	if ((req->flags & REQ_F_WORK_INITIALIZED) &&
+	    (req->work.flags & IO_WQ_WORK_CREDS) &&
 	    req->work.identity->creds != current_cred()) {
 		if (old_creds)
 			revert_creds(old_creds);
@@ -6192,7 +6193,6 @@ static void __io_queue_sqe(struct io_kiocb *req, struct io_comp_state *cs)
 			old_creds = NULL; /* restored original creds */
 		else
 			old_creds = override_creds(req->work.identity->creds);
-		req->work.flags |= IO_WQ_WORK_CREDS;
 	}
 
 	ret = io_issue_sqe(req, true, cs);
-- 
2.24.0


* [PATCH 2/7] io_uring: kill ref get/drop in personality init
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 1/7] io_uring: flags-based creds init in queue Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 3/7] io_uring: inline io_fail_links() Pavel Begunkov
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Don't take an identity on personality/creds init only to drop it a few
lines later. Extract a helper that prepares req->work but leaves it
without an identity.

Note: it's safe not to check REQ_F_WORK_INITIALIZED there because nobody
had a chance to init it before io_init_req().
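
A rough sketch of the redundant get/drop pair that goes away (simplified,
hypothetical names, not the io_uring code): the old path pinned the
default identity only to release it immediately and attach the
personality identity instead, while the new path initializes the work
bare and attaches the personality identity once.

#include <stdatomic.h>

struct identity { atomic_uint count; };

/* Old flow: io_req_init_async() pins the default identity, which the
 * personality path drops again via io_put_identity() a few lines later. */
static void personality_init_old(struct identity *def, struct identity *pers,
				 struct identity **slot)
{
	atomic_fetch_add(&def->count, 1);	/* taken in io_req_init_async() */
	*slot = def;
	atomic_fetch_sub(&def->count, 1);	/* dropped by io_put_identity() */
	atomic_fetch_add(&pers->count, 1);
	*slot = pers;
}

/* New flow: __io_req_init_async() leaves the work without an identity,
 * and the personality identity is attached exactly once. */
static void personality_init_new(struct identity *pers, struct identity **slot)
{
	atomic_fetch_add(&pers->count, 1);
	*slot = pers;
}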

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index fd2fc72c312c..048db9d3002c 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1070,6 +1070,12 @@ static void io_init_identity(struct io_identity *id)
 	refcount_set(&id->count, 1);
 }
 
+static inline void __io_req_init_async(struct io_kiocb *req)
+{
+	memset(&req->work, 0, sizeof(req->work));
+	req->flags |= REQ_F_WORK_INITIALIZED;
+}
+
 /*
  * Note: must call io_req_init_async() for the first time you
  * touch any members of io_wq_work.
@@ -1081,8 +1087,7 @@ static inline void io_req_init_async(struct io_kiocb *req)
 	if (req->flags & REQ_F_WORK_INITIALIZED)
 		return;
 
-	memset(&req->work, 0, sizeof(req->work));
-	req->flags |= REQ_F_WORK_INITIALIZED;
+	__io_req_init_async(req);
 
 	/* Grab a ref if this isn't our static identity */
 	req->work.identity = tctx->identity;
@@ -6497,12 +6502,12 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
 	if (id) {
 		struct io_identity *iod;
 
-		io_req_init_async(req);
 		iod = idr_find(&ctx->personality_idr, id);
 		if (unlikely(!iod))
 			return -EINVAL;
 		refcount_inc(&iod->count);
-		io_put_identity(current->io_uring, req);
+
+		__io_req_init_async(req);
 		get_cred(iod->creds);
 		req->work.identity = iod;
 		req->work.flags |= IO_WQ_WORK_CREDS;
-- 
2.24.0


* [PATCH 3/7] io_uring: inline io_fail_links()
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 1/7] io_uring: flags-based creds init in queue Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 2/7] io_uring: kill ref get/drop in personality init Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 4/7] io_uring: make cached_cq_overflow non atomic_t Pavel Begunkov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Inline io_fail_links() and kill extra io_cqring_ev_posted().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 048db9d3002c..43c92a3088d8 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1913,10 +1913,12 @@ static struct io_kiocb *io_req_link_next(struct io_kiocb *req)
 /*
  * Called if REQ_F_LINK_HEAD is set, and we fail the head request
  */
-static void __io_fail_links(struct io_kiocb *req)
+static void io_fail_links(struct io_kiocb *req)
 {
 	struct io_ring_ctx *ctx = req->ctx;
+	unsigned long flags;
 
+	spin_lock_irqsave(&ctx->completion_lock, flags);
 	while (!list_empty(&req->link_list)) {
 		struct io_kiocb *link = list_first_entry(&req->link_list,
 						struct io_kiocb, link_list);
@@ -1938,15 +1940,6 @@ static void __io_fail_links(struct io_kiocb *req)
 	}
 
 	io_commit_cqring(ctx);
-}
-
-static void io_fail_links(struct io_kiocb *req)
-{
-	struct io_ring_ctx *ctx = req->ctx;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ctx->completion_lock, flags);
-	__io_fail_links(req);
 	spin_unlock_irqrestore(&ctx->completion_lock, flags);
 
 	io_cqring_ev_posted(ctx);
-- 
2.24.0


* [PATCH 4/7] io_uring: make cached_cq_overflow non atomic_t
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
                   ` (2 preceding siblings ...)
  2020-10-18  9:17 ` [PATCH 3/7] io_uring: inline io_fail_links() Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 5/7] io_uring: remove extra ->file check in poll prep Pavel Begunkov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

ctx->cached_cq_overflow is modified only under completion_lock. Convert
it from atomic_t to a plain unsigned int, and mark every place where it
is read without the lock with READ_ONCE(), which guarantees an atomic
load (relaxed ordering).
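
For readers less familiar with the idiom, a rough user-space sketch of
the resulting pattern (stand-in READ_ONCE() and locking, not the actual
io_uring code): writes are serialized by the lock, so a plain increment
is enough, while lockless readers use READ_ONCE() to get a single,
untorn load of the word.

#include <pthread.h>

/* crude stand-in for the kernel's READ_ONCE(): force one untorn access */
#define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))

static pthread_mutex_t completion_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned cached_cq_overflow;	/* written only under completion_lock */

static void note_overflow(void)
{
	pthread_mutex_lock(&completion_lock);
	cached_cq_overflow++;		/* plain increment: writes are serialized */
	pthread_mutex_unlock(&completion_lock);
}

static unsigned peek_overflow(void)
{
	/* lockless read: one atomic load of the word, relaxed ordering only */
	return READ_ONCE(cached_cq_overflow);
}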

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 43c92a3088d8..c7ccd2500597 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -277,7 +277,7 @@ struct io_ring_ctx {
 		unsigned		sq_mask;
 		unsigned		sq_thread_idle;
 		unsigned		cached_sq_dropped;
-		atomic_t		cached_cq_overflow;
+		unsigned		cached_cq_overflow;
 		unsigned long		sq_check_overflow;
 
 		struct list_head	defer_list;
@@ -1179,7 +1179,7 @@ static bool req_need_defer(struct io_kiocb *req, u32 seq)
 		struct io_ring_ctx *ctx = req->ctx;
 
 		return seq != ctx->cached_cq_tail
-				+ atomic_read(&ctx->cached_cq_overflow);
+				+ READ_ONCE(ctx->cached_cq_overflow);
 	}
 
 	return false;
@@ -1624,8 +1624,9 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
 			WRITE_ONCE(cqe->res, req->result);
 			WRITE_ONCE(cqe->flags, req->compl.cflags);
 		} else {
+			ctx->cached_cq_overflow++;
 			WRITE_ONCE(ctx->rings->cq_overflow,
-				atomic_inc_return(&ctx->cached_cq_overflow));
+				   ctx->cached_cq_overflow);
 		}
 	}
 
@@ -1667,8 +1668,8 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
 		 * then we cannot store the request for later flushing, we need
 		 * to drop it on the floor.
 		 */
-		WRITE_ONCE(ctx->rings->cq_overflow,
-				atomic_inc_return(&ctx->cached_cq_overflow));
+		ctx->cached_cq_overflow++;
+		WRITE_ONCE(ctx->rings->cq_overflow, ctx->cached_cq_overflow);
 	} else {
 		if (list_empty(&ctx->cq_overflow_list)) {
 			set_bit(0, &ctx->sq_check_overflow);
-- 
2.24.0


* [PATCH 5/7] io_uring: remove extra ->file check in poll prep
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
                   ` (3 preceding siblings ...)
  2020-10-18  9:17 ` [PATCH 4/7] io_uring: make cached_cq_overflow non atomic_t Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 6/7] io_uring: inline io_poll_task_handler() Pavel Begunkov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

io_poll_add_prep() doesn't need to verify ->file because it's already
done in io_init_req().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index c7ccd2500597..70f4f1ce3011 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5340,8 +5340,6 @@ static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
 		return -EINVAL;
 	if (sqe->addr || sqe->ioprio || sqe->off || sqe->len || sqe->buf_index)
 		return -EINVAL;
-	if (!poll->file)
-		return -EBADF;
 
 	events = READ_ONCE(sqe->poll32_events);
 #ifdef __BIG_ENDIAN
-- 
2.24.0


* [PATCH 6/7] io_uring: inline io_poll_task_handler()
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
                   ` (4 preceding siblings ...)
  2020-10-18  9:17 ` [PATCH 5/7] io_uring: remove extra ->file check in poll prep Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-18  9:17 ` [PATCH 7/7] io_uring: do poll's hash_node init in common code Pavel Begunkov
  2020-10-20 14:10 ` [PATCH for-5.10 0/7] another pack of random cleanups Jens Axboe
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

io_poll_task_handler() doesn't add clarity, so inline it into its only
user.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 70f4f1ce3011..81b0b38ee506 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -4924,32 +4924,25 @@ static void io_poll_complete(struct io_kiocb *req, __poll_t mask, int error)
 	io_commit_cqring(ctx);
 }
 
-static void io_poll_task_handler(struct io_kiocb *req, struct io_kiocb **nxt)
+static void io_poll_task_func(struct callback_head *cb)
 {
+	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
 	struct io_ring_ctx *ctx = req->ctx;
+	struct io_kiocb *nxt;
 
 	if (io_poll_rewait(req, &req->poll)) {
 		spin_unlock_irq(&ctx->completion_lock);
-		return;
-	}
-
-	hash_del(&req->hash_node);
-	io_poll_complete(req, req->result, 0);
-	spin_unlock_irq(&ctx->completion_lock);
-
-	*nxt = io_put_req_find_next(req);
-	io_cqring_ev_posted(ctx);
-}
+	} else {
+		hash_del(&req->hash_node);
+		io_poll_complete(req, req->result, 0);
+		spin_unlock_irq(&ctx->completion_lock);
 
-static void io_poll_task_func(struct callback_head *cb)
-{
-	struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
-	struct io_ring_ctx *ctx = req->ctx;
-	struct io_kiocb *nxt = NULL;
+		nxt = io_put_req_find_next(req);
+		io_cqring_ev_posted(ctx);
+		if (nxt)
+			__io_req_task_submit(nxt);
+	}
 
-	io_poll_task_handler(req, &nxt);
-	if (nxt)
-		__io_req_task_submit(nxt);
 	percpu_ref_put(&ctx->refs);
 }
 
-- 
2.24.0


* [PATCH 7/7] io_uring: do poll's hash_node init in common code
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
                   ` (5 preceding siblings ...)
  2020-10-18  9:17 ` [PATCH 6/7] io_uring: inline io_poll_task_handler() Pavel Begunkov
@ 2020-10-18  9:17 ` Pavel Begunkov
  2020-10-20 14:10 ` [PATCH for-5.10 0/7] another pack of random cleanups Jens Axboe
  7 siblings, 0 replies; 9+ messages in thread
From: Pavel Begunkov @ 2020-10-18  9:17 UTC (permalink / raw)
  To: Jens Axboe, io-uring

Move INIT_HLIST_NODE(&req->hash_node) into __io_arm_poll_handler(), so
that it isn't duplicated and the common poll code is responsible for it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 81b0b38ee506..95d2bb7069c6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5096,6 +5096,7 @@ static __poll_t __io_arm_poll_handler(struct io_kiocb *req,
 	struct io_ring_ctx *ctx = req->ctx;
 	bool cancel = false;
 
+	INIT_HLIST_NODE(&req->hash_node);
 	io_init_poll_iocb(poll, mask, wake_func);
 	poll->file = req->file;
 	poll->wait.private = req;
@@ -5157,7 +5158,6 @@ static bool io_arm_poll_handler(struct io_kiocb *req)
 
 	req->flags |= REQ_F_POLLED;
 	req->apoll = apoll;
-	INIT_HLIST_NODE(&req->hash_node);
 
 	mask = 0;
 	if (def->pollin)
@@ -5350,7 +5350,6 @@ static int io_poll_add(struct io_kiocb *req)
 	struct io_poll_table ipt;
 	__poll_t mask;
 
-	INIT_HLIST_NODE(&req->hash_node);
 	ipt.pt._qproc = io_poll_queue_proc;
 
 	mask = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events,
-- 
2.24.0


* Re: [PATCH for-5.10 0/7] another pack of random cleanups
  2020-10-18  9:17 [PATCH for-5.10 0/7] another pack of random cleanups Pavel Begunkov
                   ` (6 preceding siblings ...)
  2020-10-18  9:17 ` [PATCH 7/7] io_uring: do poll's hash_node init in common code Pavel Begunkov
@ 2020-10-20 14:10 ` Jens Axboe
  7 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2020-10-20 14:10 UTC (permalink / raw)
  To: Pavel Begunkov, io-uring

On 10/18/20 3:17 AM, Pavel Begunkov wrote:
> Nothing particularly interesting, just flushing patches that are
> simple and straightforward.

LGTM, applied.

-- 
Jens Axboe

