* [PATCH stable-5.15 0/5] io_uring/poll backports for 5.15
From: Pavel Begunkov @ 2022-12-02 14:27 UTC
  To: stable; +Cc: Jens Axboe, asml.silence

Recent patches for io_uring polling.

Lin Ma (1):
  io_uring/poll: fix poll_refs race with cancelation

Pavel Begunkov (4):
  io_uring: update res mask in io_poll_check_events
  io_uring: fix tw losing poll events
  io_uring: cmpxchg for poll arm refs release
  io_uring: make poll refs more robust

 fs/io_uring.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 50 insertions(+), 7 deletions(-)

-- 
2.38.1



* [PATCH stable-5.15 1/5] io_uring: update res mask in io_poll_check_events
From: Pavel Begunkov @ 2022-12-02 14:27 UTC
  To: stable; +Cc: Jens Axboe, asml.silence

[ upstream commit b98186aee22fa593bc8c6b2c5d839c2ee518bc8c ]

When io_poll_check_events() collides with someone attempting to queue a
task work, it'll spin for one more time. However, it'll continue to use
the mask from the first iteration instead of updating it. For example,
if the first wake up was an EPOLLIN and the second an EPOLLOUT, userspace
will not get EPOLLOUT in time.

Clear the mask for all subsequent iterations to force vfs_poll().
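
To illustrate, here is a minimal standalone model of the retry loop in
io_poll_check_events(); it is a sketch with simplified names (struct req,
poll_events()), not the kernel code:

  #include <stdatomic.h>

  struct req {
  	atomic_int poll_refs;	/* low bits: ownership references */
  	int result;		/* cached event mask from the last wake up */
  };

  /* stand-in for vfs_poll() */
  static int poll_events(struct req *req)
  {
  	(void)req;
  	return 0;
  }

  static int check_events(struct req *req)
  {
  	int v;

  	do {
  		v = atomic_load(&req->poll_refs);
  		if (!req->result)
  			req->result = poll_events(req);

  		/* ... handle the events in req->result ... */

  		/* the fix: clear the cached mask so that a retry
  		 * iteration goes through poll_events() again instead
  		 * of reusing the mask from the first wake up */
  		req->result = 0;

  		/* drop the refs we saw; spin once more if wake ups
  		 * were queued while we were running */
  	} while (atomic_fetch_sub(&req->poll_refs, v) - v != 0);

  	return 1;
  }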

Cc: stable@vger.kernel.org
Fixes: aa43477b04025 ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/2dac97e8f691231049cb259c4ae57e79e40b537c.1668710222.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index b8ae64df90e3..2ba42e6e0881 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5464,6 +5464,9 @@ static int io_poll_check_events(struct io_kiocb *req)
 			return 0;
 		}
 
+		/* force the next iteration to vfs_poll() */
+		req->result = 0;
+
 		/*
 		 * Release all references, retry if someone tried to restart
 		 * task_work while we were executing it.
-- 
2.38.1



* [PATCH stable-5.15 2/5] io_uring: fix tw losing poll events
From: Pavel Begunkov @ 2022-12-02 14:27 UTC
  To: stable; +Cc: Jens Axboe, asml.silence

[ upstream commit 539bcb57da2f58886d7d5c17134236b0ec9cd15d ]

We may never try to process a poll wake and its mask if there were
multiple wake ups racing to queue up a tw. Force
io_poll_check_events() to update the mask via vfs_poll().
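
The ref accounting behind this can be modelled in a few lines; a sketch
in C11 atomics, not the kernel code:

  #include <stdatomic.h>
  #include <stdbool.h>

  /*
   * Each wake up increments poll_refs; only the 0 -> 1 transition
   * grants ownership. So if the owner later observes refs > 1, at
   * least one extra wake up raced in whose events were never recorded,
   * i.e. the cached mask is stale and the file must be polled again.
   */
  static bool get_ownership(atomic_int *poll_refs)
  {
  	return atomic_fetch_add(poll_refs, 1) == 0;
  }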

Cc: stable@vger.kernel.org
Fixes: aa43477b04025 ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/00344d60f8b18907171178d7cf598de71d127b0b.1668710222.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 2ba42e6e0881..62e0d352c78e 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5440,6 +5440,13 @@ static int io_poll_check_events(struct io_kiocb *req)
 			return 0;
 		if (v & IO_POLL_CANCEL_FLAG)
 			return -ECANCELED;
+		/*
+		 * cqe.res contains only events of the first wake up
+		 * and all others are lost. Redo vfs_poll() to get
+		 * up to date state.
+		 */
+		if ((v & IO_POLL_REF_MASK) != 1)
+			req->result = 0;
 
 		if (!req->result) {
 			struct poll_table_struct pt = { ._key = poll->events };
-- 
2.38.1



* [PATCH stable-5.15 3/5] io_uring: cmpxchg for poll arm refs release
From: Pavel Begunkov @ 2022-12-02 14:27 UTC
  To: stable; +Cc: Jens Axboe, asml.silence

[ upstream commit 2f3893437a4ebf2e892ca172e9e122841319d675 ]

Replace atomically subtracting the ownership reference at the end of
arming a poll with a cmpxchg. We try to release ownership by setting 0,
assuming that poll_refs didn't change while we were arming. If it did
change, we keep the ownership and use it to queue a tw, which is fully
capable of processing all events and even tolerates spurious wake ups.

It's a bit more elegant as we reduce races b/w setting the cancellation
flag and getting refs with this release, and with that we don't have to
worry about any kind of underflow. This is not the hottest path for
polling, and the performance difference b/w cmpxchg and atomic dec there
is usually negligible.
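
A before/after sketch of the release step (C11 atomics, simplified
names; io_poll_execute() is a stand-in for the kernel helper):

  #include <stdatomic.h>

  #define IO_POLL_REF_MASK	((1u << 30) - 1)

  struct req { atomic_uint poll_refs; };

  void io_poll_execute(struct req *req);	/* stand-in */

  static void release_ownership(struct req *req)
  {
  	/*
  	 * before: unconditionally drop our ref, then inspect the rest:
  	 *
  	 *	unsigned int v = atomic_fetch_sub(&req->poll_refs, 1) - 1;
  	 *	if (v & IO_POLL_REF_MASK)
  	 *		io_poll_execute(req);
  	 */

  	/* after: release only if poll_refs is still exactly 1 (only
  	 * us); on any concurrent change keep ownership and queue a tw */
  	unsigned int expected = 1;
  	if (!atomic_compare_exchange_strong(&req->poll_refs, &expected, 0))
  		io_poll_execute(req);
  }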

Cc: stable@vger.kernel.org
Fixes: aa43477b04025 ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0c95251624397ea6def568ff040cad2d7926fd51.1668963050.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 62e0d352c78e..0e3fc80fee05 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5650,7 +5650,6 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 				 struct io_poll_table *ipt, __poll_t mask)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	int v;
 
 	INIT_HLIST_NODE(&req->hash_node);
 	io_init_poll_iocb(poll, mask, io_poll_wake);
@@ -5696,11 +5695,10 @@ static int __io_arm_poll_handler(struct io_kiocb *req,
 	}
 
 	/*
-	 * Release ownership. If someone tried to queue a tw while it was
-	 * locked, kick it off for them.
+	 * Try to release ownership. If we see a change of state, e.g.
+	 * poll was woken up, queue up a tw, it'll deal with it.
 	 */
-	v = atomic_dec_return(&req->poll_refs);
-	if (unlikely(v & IO_POLL_REF_MASK))
+	if (atomic_cmpxchg(&req->poll_refs, 1, 0) != 1)
 		__io_poll_execute(req, 0);
 	return 0;
 }
-- 
2.38.1



* [PATCH stable-5.15 4/5] io_uring: make poll refs more robust
From: Pavel Begunkov @ 2022-12-02 14:27 UTC
  To: stable; +Cc: Jens Axboe, asml.silence, Lin Ma

[ upstream commit a26a35e9019fd70bf3cf647dcfdae87abc7bacea ]

poll_refs serve two functions: the first is ownership over the request.
The second is notifying io_poll_check_events() that there was an event
but the wake up couldn't grab the ownership, so io_poll_check_events()
should retry.

We want to make poll_refs more robust against overflows. Instead of
always incrementing it, which covers two purposes with one atomic, check
whether poll_refs is already elevated enough and, if so, set a retry
flag without attempting to grab ownership. The gap between the bias
check and the following atomics may seem racy, but we don't need it to
be strict. Moreover, there can be at most 4 parallel updates: by the
first and the second poll entries, __io_arm_poll_handler() and
cancellation. Of those four, only poll wake ups may execute multiple
times, but they're protected by a spinlock.
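
Spelled out, the resulting bit layout as a small self-checking sketch
(the macro values mirror the diff below; the asserts are illustration,
not kernel code):

  #include <assert.h>
  #include <stdint.h>

  #define IO_POLL_CANCEL_FLAG	(UINT32_C(1) << 31)		/* BIT(31) */
  #define IO_POLL_RETRY_FLAG	(UINT32_C(1) << 30)		/* BIT(30) */
  #define IO_POLL_REF_MASK	((UINT32_C(1) << 30) - 1)	/* GENMASK(29, 0) */
  #define IO_POLL_REF_BIAS	128

  int main(void)
  {
  	/* the flag bits never alias the reference count */
  	assert(!(IO_POLL_CANCEL_FLAG & IO_POLL_REF_MASK));
  	assert(!(IO_POLL_RETRY_FLAG & IO_POLL_REF_MASK));
  	/* the bias trips the slowpath at 128 refs, leaving a huge
  	 * margin before the count could carry into bit 30 */
  	assert(IO_POLL_REF_BIAS < IO_POLL_REF_MASK);
  	return 0;
  }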

Cc: stable@vger.kernel.org
Reported-by: Lin Ma <linma@zju.edu.cn>
Fixes: aa43477b04025 ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/c762bc31f8683b3270f3587691348a7119ef9c9d.1668963050.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/io_uring.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 0e3fc80fee05..11dcad170594 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5322,7 +5322,29 @@ struct io_poll_table {
 };
 
 #define IO_POLL_CANCEL_FLAG	BIT(31)
-#define IO_POLL_REF_MASK	GENMASK(30, 0)
+#define IO_POLL_RETRY_FLAG	BIT(30)
+#define IO_POLL_REF_MASK	GENMASK(29, 0)
+
+/*
+ * We usually have 1-2 refs taken, 128 is more than enough and we want to
+ * maximise the margin between this amount and the moment when it overflows.
+ */
+#define IO_POLL_REF_BIAS       128
+
+static bool io_poll_get_ownership_slowpath(struct io_kiocb *req)
+{
+	int v;
+
+	/*
+	 * poll_refs are already elevated and we don't have much hope for
+	 * grabbing the ownership. Instead of incrementing set a retry flag
+	 * to notify the loop that there might have been some change.
+	 */
+	v = atomic_fetch_or(IO_POLL_RETRY_FLAG, &req->poll_refs);
+	if (v & IO_POLL_REF_MASK)
+		return false;
+	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
+}
 
 /*
  * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can
@@ -5332,6 +5354,8 @@ struct io_poll_table {
  */
 static inline bool io_poll_get_ownership(struct io_kiocb *req)
 {
+	if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
+		return io_poll_get_ownership_slowpath(req);
 	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
 }
 
@@ -5447,6 +5471,16 @@ static int io_poll_check_events(struct io_kiocb *req)
 		 */
 		if ((v & IO_POLL_REF_MASK) != 1)
 			req->result = 0;
+		if (v & IO_POLL_RETRY_FLAG) {
+			req->result = 0;
+			/*
+			 * We won't find new events that came in between
+			 * vfs_poll and the ref put unless we clear the
+			 * flag in advance.
+			 */
+			atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);
+			v &= ~IO_POLL_RETRY_FLAG;
+		}
 
 		if (!req->result) {
 			struct poll_table_struct pt = { ._key = poll->events };
-- 
2.38.1



* [PATCH stable-5.15 5/5] io_uring/poll: fix poll_refs race with cancelation
From: Pavel Begunkov @ 2022-12-02 14:27 UTC
  To: stable; +Cc: Jens Axboe, asml.silence, Lin Ma

From: Lin Ma <linma@zju.edu.cn>

[ upstream commit 12ad3d2d6c5b0131a6052de91360849e3e154846 ]

There is an interesting race condition of poll_refs which could result
in a NULL pointer dereference. The crash trace is like:

KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
CPU: 0 PID: 30781 Comm: syz-executor.2 Not tainted 6.0.0-g493ffd6605b2 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
1.13.0-1ubuntu1.1 04/01/2014
RIP: 0010:io_poll_remove_entry io_uring/poll.c:154 [inline]
RIP: 0010:io_poll_remove_entries+0x171/0x5b4 io_uring/poll.c:190
Code: ...
RSP: 0018:ffff88810dfefba0 EFLAGS: 00010202
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 0000000000040000
RDX: ffffc900030c4000 RSI: 000000000003ffff RDI: 0000000000040000
RBP: 0000000000000008 R08: ffffffff9764d3dd R09: fffffbfff3836781
R10: fffffbfff3836781 R11: 0000000000000000 R12: 1ffff11003422d60
R13: ffff88801a116b04 R14: ffff88801a116ac0 R15: dffffc0000000000
FS:  00007f9c07497700(0000) GS:ffff88811a600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffb5c00ea98 CR3: 0000000105680005 CR4: 0000000000770ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
PKRU: 55555554
Call Trace:
 <TASK>
 io_apoll_task_func+0x3f/0xa0 io_uring/poll.c:299
 handle_tw_list io_uring/io_uring.c:1037 [inline]
 tctx_task_work+0x37e/0x4f0 io_uring/io_uring.c:1090
 task_work_run+0x13a/0x1b0 kernel/task_work.c:177
 get_signal+0x2402/0x25a0 kernel/signal.c:2635
 arch_do_signal_or_restart+0x3b/0x660 arch/x86/kernel/signal.c:869
 exit_to_user_mode_loop kernel/entry/common.c:166 [inline]
 exit_to_user_mode_prepare+0xc2/0x160 kernel/entry/common.c:201
 __syscall_exit_to_user_mode_work kernel/entry/common.c:283 [inline]
 syscall_exit_to_user_mode+0x58/0x160 kernel/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

The root cause is a subtle oversight in io_poll_check_events() when it
runs concurrently with the poll cancel routine io_poll_cancel_req().

The interleaving that triggers the use-after-free:

CPU0                                       |  CPU1
                                           |
io_apoll_task_func()                       |  io_poll_cancel_req()
 io_poll_check_events()                    |
  // do while first loop                   |
  v = atomic_read(...)                     |
  // v = poll_refs = 1                     |
  ...                                      |  io_poll_mark_cancelled()
                                           |   atomic_or()
                                           |   // poll_refs = IO_POLL_CANCEL_FLAG | 1
                                           |
  atomic_sub_return(...)                   |
  // poll_refs = IO_POLL_CANCEL_FLAG       |
  // loop continue                         |
                                           |
                                           |  io_poll_execute()
                                           |   io_poll_get_ownership()
                                           |   // poll_refs = IO_POLL_CANCEL_FLAG | 1
                                           |   // gets the ownership
  v = atomic_read(...)                     |
  // poll_refs not change                  |
                                           |
  if (v & IO_POLL_CANCEL_FLAG)             |
   return -ECANCELED;                      |
  // io_poll_check_events return           |
  // will go into                          |
  // io_req_complete_failed() free req     |
                                           |
                                           |  io_apoll_task_func()
                                           |  // also go into io_req_complete_failed()

And the interleaving that triggers the kernel WARNING:

CPU0                                       |  CPU1
                                           |
io_apoll_task_func()                       |  io_poll_cancel_req()
 io_poll_check_events()                    |
  // do while first loop                   |
  v = atomic_read(...)                     |
  // v = poll_refs = 1                     |
  ...                                      |  io_poll_mark_cancelled()
                                           |   atomic_or()
                                           |   // poll_refs = IO_POLL_CANCEL_FLAG | 1
                                           |
  atomic_sub_return(...)                   |
  // poll_refs = IO_POLL_CANCEL_FLAG       |
  // loop continue                         |
                                           |
  v = atomic_read(...)                     |
  // v = IO_POLL_CANCEL_FLAG               |
                                           |  io_poll_execute()
                                           |   io_poll_get_ownership()
                                           |   // poll_refs = IO_POLL_CANCEL_FLAG | 1
                                           |   // gets the ownership
                                           |
  WARN_ON_ONCE(!(v & IO_POLL_REF_MASK))    |
  // v & IO_POLL_REF_MASK = 0 WARN         |
                                           |
                                           |  io_apoll_task_func()
                                           |  // also go into io_req_complete_failed()

From examining the source code and communicating with Pavel, the intent
of these atomic poll refs is that io_poll_check_events() keeps looping
simply so that nothing else can grab the ownership. Therefore, this
patch adds another AND operation to make sure the loop stops once it
finds poll_refs exactly equal to IO_POLL_CANCEL_FLAG. Since
io_poll_cancel_req() grabs ownership and will finally make its way to
io_req_complete_failed(), the req will be reclaimed as expected.
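
As a standalone model of the fixed exit condition (C11 atomics;
sub_return() mimics the kernel's atomic_sub_return(), the rest is
simplified and not the kernel code):

  #include <stdatomic.h>

  #define IO_POLL_CANCEL_FLAG	(1u << 31)
  #define IO_POLL_REF_MASK	((1u << 30) - 1)

  struct req { atomic_uint poll_refs; };

  /* mimics the kernel's atomic_sub_return(): returns the new value */
  static unsigned int sub_return(atomic_uint *p, unsigned int i)
  {
  	return atomic_fetch_sub(p, i) - i;
  }

  static int check_events(struct req *req)
  {
  	unsigned int v;

  	do {
  		v = atomic_load(&req->poll_refs);
  		if (v & IO_POLL_CANCEL_FLAG)
  			return -1;	/* -ECANCELED */

  		/* ... process events ... */

  		/*
  		 * The old condition looped while the whole result was
  		 * nonzero, so a lingering IO_POLL_CANCEL_FLAG kept the
  		 * loop alive even after our refs were gone. Masking
  		 * with IO_POLL_REF_MASK exits once only the cancel
  		 * flag remains: the canceller that grabbed ownership
  		 * completes and frees the req.
  		 */
  	} while (sub_return(&req->poll_refs, v & IO_POLL_REF_MASK) &
  		 IO_POLL_REF_MASK);

  	return 1;
  }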

Fixes: aa43477b0402 ("io_uring: poll rework")
Signed-off-by: Lin Ma <linma@zju.edu.cn>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
[axboe: tweak description and code style]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 fs/io_uring.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 11dcad170594..c2fdde6fdda3 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5512,7 +5512,8 @@ static int io_poll_check_events(struct io_kiocb *req)
 		 * Release all references, retry if someone tried to restart
 		 * task_work while we were executing it.
 		 */
-	} while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs));
+	} while (atomic_sub_return(v & IO_POLL_REF_MASK, &req->poll_refs) &
+					IO_POLL_REF_MASK);
 
 	return 1;
 }
-- 
2.38.1



* Re: [PATCH stable-5.15 0/5] io_uring/poll backports for 5.15
From: Greg KH @ 2022-12-03 13:19 UTC
  To: Pavel Begunkov; +Cc: stable, Jens Axboe

On Fri, Dec 02, 2022 at 02:27:10PM +0000, Pavel Begunkov wrote:
> Recent patches for io_uring polling.
> 
> Lin Ma (1):
>   io_uring/poll: fix poll_refs race with cancelation
> 
> Pavel Begunkov (4):
>   io_uring: update res mask in io_poll_check_events
>   io_uring: fix tw losing poll events
>   io_uring: cmpxchg for poll arm refs release
>   io_uring: make poll refs more robust
> 
>  fs/io_uring.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 50 insertions(+), 7 deletions(-)

All now queued up, thanks

greg k-h
