* [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Joseph Qi @ 2021-01-18  9:50 UTC
  To: Jens Axboe; +Cc: io-uring, Xiaoguang Wang

Abaci reported the following crash:

[   31.252589] BUG: kernel NULL pointer dereference, address: 00000000000000d8
[   31.253942] #PF: supervisor read access in kernel mode
[   31.254945] #PF: error_code(0x0000) - not-present page
[   31.255964] PGD 800000010b76f067 P4D 800000010b76f067 PUD 10b462067 PMD 0
[   31.257221] Oops: 0000 [#1] SMP PTI
[   31.257923] CPU: 1 PID: 1788 Comm: io_uring-sq Not tainted 5.11.0-rc4 #1
[   31.259175] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[   31.260232] RIP: 0010:__lock_acquire+0x19d/0x18c0
[   31.261144] Code: 00 00 8b 1d fd 56 dd 08 85 db 0f 85 43 05 00 00 48 c7 c6 98 7b 95 82 48 c7 c7 57 96 93 82 e8 9a bc f5 ff 0f 0b e9 2b 05 00 00 <48> 81 3f c0 ca 67 8a b8 00 00 00 00 41 0f 45 c0 89 04 24 e9 81 fe
[   31.264297] RSP: 0018:ffffc90001933828 EFLAGS: 00010002
[   31.265320] RAX: 0000000000000001 RBX: 0000000000000001 RCX: 0000000000000000
[   31.266594] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000000000d8
[   31.267922] RBP: 0000000000000246 R08: 0000000000000001 R09: 0000000000000000
[   31.269262] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[   31.270550] R13: 0000000000000000 R14: ffff888106e8a140 R15: 00000000000000d8
[   31.271760] FS:  0000000000000000(0000) GS:ffff88813bd00000(0000) knlGS:0000000000000000
[   31.273269] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   31.274330] CR2: 00000000000000d8 CR3: 0000000106efa004 CR4: 00000000003706e0
[   31.275613] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   31.276855] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   31.278065] Call Trace:
[   31.278649]  lock_acquire+0x31a/0x440
[   31.279404]  ? close_fd_get_file+0x39/0x160
[   31.280276]  ? __lock_acquire+0x647/0x18c0
[   31.281112]  _raw_spin_lock+0x2c/0x40
[   31.281821]  ? close_fd_get_file+0x39/0x160
[   31.282586]  close_fd_get_file+0x39/0x160
[   31.283338]  io_issue_sqe+0x1334/0x14e0
[   31.284053]  ? lock_acquire+0x31a/0x440
[   31.284763]  ? __io_free_req+0xcf/0x2e0
[   31.285504]  ? __io_free_req+0x175/0x2e0
[   31.286247]  ? find_held_lock+0x28/0xb0
[   31.286968]  ? io_wq_submit_work+0x7f/0x240
[   31.287733]  io_wq_submit_work+0x7f/0x240
[   31.288486]  io_wq_cancel_cb+0x161/0x580
[   31.289230]  ? io_wqe_wake_worker+0x114/0x360
[   31.290020]  ? io_uring_get_socket+0x40/0x40
[   31.290832]  io_async_find_and_cancel+0x3b/0x140
[   31.291676]  io_issue_sqe+0xbe1/0x14e0
[   31.292405]  ? __lock_acquire+0x647/0x18c0
[   31.293207]  ? __io_queue_sqe+0x10b/0x5f0
[   31.293986]  __io_queue_sqe+0x10b/0x5f0
[   31.294747]  ? io_req_prep+0xdb/0x1150
[   31.295485]  ? mark_held_locks+0x6d/0xb0
[   31.296252]  ? mark_held_locks+0x6d/0xb0
[   31.297019]  ? io_queue_sqe+0x235/0x4b0
[   31.297774]  io_queue_sqe+0x235/0x4b0
[   31.298496]  io_submit_sqes+0xd7e/0x12a0
[   31.299275]  ? _raw_spin_unlock_irq+0x24/0x30
[   31.300121]  ? io_sq_thread+0x3ae/0x940
[   31.300873]  io_sq_thread+0x207/0x940
[   31.301606]  ? do_wait_intr_irq+0xc0/0xc0
[   31.302396]  ? __ia32_sys_io_uring_enter+0x650/0x650
[   31.303321]  kthread+0x134/0x180
[   31.303982]  ? kthread_create_worker_on_cpu+0x90/0x90
[   31.304886]  ret_from_fork+0x1f/0x30

This is caused by files being NULL when we async cancel a close
request, which has IO_WQ_WORK_NO_CANCEL set and therefore continues to do
its work. Fix it by also setting needs_file for IORING_OP_ASYNC_CANCEL.
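
For reference, a rough sketch of the crashing path (assuming the
5.11-rc4 shape of close_fd_get_file(); the exact body may differ):

    /* fs/file.c, sketch: the sqpoll/io-wq kthread has no files */
    int close_fd_get_file(unsigned int fd, struct file **res)
    {
        struct files_struct *files = current->files; /* NULL here */

        spin_lock(&files->file_lock); /* faults at NULL + 0xd8 */
        ...
    }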

Reported-by: Abaci <abaci@linux.alibaba.com>
Cc: stable@vger.kernel.org # 5.6+
Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
---
 fs/io_uring.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 985a9e3..8eb1349 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -883,7 +883,10 @@ struct io_op_def {
 		.pollin			= 1,
 		.work_flags		= IO_WQ_WORK_MM | IO_WQ_WORK_FILES,
 	},
-	[IORING_OP_ASYNC_CANCEL] = {},
+	[IORING_OP_ASYNC_CANCEL] = {
+		/* for async cancel close */
+		.needs_file		= 1,
+	},
 	[IORING_OP_LINK_TIMEOUT] = {
 		.needs_async_data	= 1,
 		.async_size		= sizeof(struct io_timeout_data),
-- 
1.8.3.1



* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Pavel Begunkov @ 2021-01-18 12:23 UTC
  To: Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang

On 18/01/2021 09:50, Joseph Qi wrote:
> Abaci reported the following crash:
> 
> [... crash trace snipped ...]
> 
> This is caused by files being NULL when we async cancel a close
> request, which has IO_WQ_WORK_NO_CANCEL set and therefore continues to do
> its work. Fix it by also setting needs_file for IORING_OP_ASYNC_CANCEL.

Looks good enough for a quick late-rc fix,
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>

But we need to rework this NO_CANCEL case in the future. A
reproducer would help a lot -- do you have one? Or even better, a
liburing test?

> 
> [... patch snipped ...]

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Pavel Begunkov @ 2021-01-18 15:08 UTC
  To: Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang

On 18/01/2021 12:23, Pavel Begunkov wrote:
> On 18/01/2021 09:50, Joseph Qi wrote:
>> Abaci reported the following crash:
>>
>> [... crash trace snipped ...]
>>
>> This is caused by files being NULL when we async cancel a close
>> request, which has IO_WQ_WORK_NO_CANCEL set and therefore continues to do
>> its work. Fix it by also setting needs_file for IORING_OP_ASYNC_CANCEL.
> 
> Looks good enough for a quick late-rc fix,
> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>

Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
it grab a struct file, which is wrong.
It probably worked out because it just grabbed fd=0/stdin.
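
Roughly the difference, as a sketch (helper and field names from the
5.11-era code, from memory, so treat them as approximations):

    /* needs_file: submission grabs a struct file for sqe->fd ... */
    req->file = io_file_get(state, req, READ_ONCE(sqe->fd), fixed);

    /* ... while IO_WQ_WORK_FILES saves the files_struct identity so
     * an io-wq worker can later act on the right fd table */
    req->work.identity->files = current->files;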

> 
> But we need to rework this NO_CANCEL case in the future. A
> reproducer would help a lot -- do you have one? Or even better, a
> liburing test?
> 
>>
>> [... patch snipped ...]
> 

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Joseph Qi @ 2021-01-19  1:58 UTC
  To: Pavel Begunkov, Jens Axboe; +Cc: io-uring, Xiaoguang Wang



On 1/18/21 11:08 PM, Pavel Begunkov wrote:
> On 18/01/2021 12:23, Pavel Begunkov wrote:
>> On 18/01/2021 09:50, Joseph Qi wrote:
>>> Abaci reported the following crash:
>>>
>>> [... crash trace snipped ...]
>>>
>>> This is caused by files being NULL when we async cancel a close
>>> request, which has IO_WQ_WORK_NO_CANCEL set and therefore continues to do
>>> its work. Fix it by also setting needs_file for IORING_OP_ASYNC_CANCEL.
>>
>> Looks good enough for a quick late-rc fix,
>> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
> 
> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
> it grab a struct file, which is wrong.
> It probably worked out because it just grabbed fd=0/stdin.
> 

I think IO_WQ_WORK_FILES can work, since it will acquire
files when initializing the async cancel request.
I don't quite understand why we should need IO_WQ_WORK_BLKCG.

Thanks,
Joseph


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Joseph Qi @ 2021-01-19  2:08 UTC
  To: Pavel Begunkov, Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang



On 1/18/21 8:23 PM, Pavel Begunkov wrote:
> On 18/01/2021 09:50, Joseph Qi wrote:
>> Abaci reported the following crash:
>>
>> [... crash trace snipped ...]
>>
>> This is caused by files being NULL when we async cancel a close
>> request, which has IO_WQ_WORK_NO_CANCEL set and therefore continues to do
>> its work. Fix it by also setting needs_file for IORING_OP_ASYNC_CANCEL.
> 
> Looks good enough for a quick late-rc fix,
> Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
> 
> But we need to rework this NO_CANCEL case in the future. A
> reproducer would help a lot -- do you have one? Or even better, a
> liburing test?
> 
Yes, it's a syzkaller reproducer. I'll try to add a liburing test case later.

Thanks,
Joseph


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Pavel Begunkov @ 2021-01-19  2:38 UTC
  To: Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang

On 19/01/2021 01:58, Joseph Qi wrote:
>> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
>> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
>> it grab a struct file, which is wrong.
>> It probably worked out because it just grabbed fd=0/stdin.
>>
> 
> I think IO_WQ_WORK_FILES can work, since it will acquire
> files when initializing the async cancel request.

That's the one controlling files in the first place; needs_file
just happened to grab them at submission.

> I don't quite understand why we should need IO_WQ_WORK_BLKCG.

Because it's set for IORING_OP_CLOSE, and a similar situation
may happen, but with an async_cancel issued from io-wq.
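
From memory, the 5.11-era op table entry is something like this
(a sketch, may not match the tree exactly):

    [IORING_OP_CLOSE] = {
        .work_flags     = IO_WQ_WORK_FILES | IO_WQ_WORK_BLKCG,
    },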

Actually, it's even nastier than that: none of the io_op_def
flags would work, because in the io-wq case you can end up doing
close() with files different from the original ones. I'll think
about how it can be done tomorrow.

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Joseph Qi @ 2021-01-19  8:00 UTC
  To: Pavel Begunkov, Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang



On 1/19/21 10:38 AM, Pavel Begunkov wrote:
> On 19/01/2021 01:58, Joseph Qi wrote:
>>> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
>>> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
>>> it grab a struct file, which is wrong.
>>> It probably worked out because it just grabbed fd=0/stdin.
>>>
>>
>> I think IO_WQ_WORK_FILES can work, since it will acquire
>> files when initializing the async cancel request.
> 
> That's the one controlling files in the first place; needs_file
> just happened to grab them at submission.
> 
>> I don't quite understand why we should need IO_WQ_WORK_BLKCG.
> 
> Because it's set for IORING_OP_CLOSE, and a similar situation
> may happen, but with an async_cancel issued from io-wq.
> 
So how about doing a switch and restore in io_run_cancel()? It seems it
can take care of the direct request, sqthread, and io-wq cases.
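
Something like this, as an untested sketch (based on io_run_cancel() in
fs/io-wq.c; files_of_work() is a hand-waved hypothetical helper, since
io-wq can't currently reach the request's saved files):

    static void io_run_cancel(struct io_wq_work *work, struct io_wqe *wqe)
    {
        struct io_wq *wq = wqe->wq;
        struct files_struct *old_files = current->files;

        /* temporarily adopt the files of the victim request */
        task_lock(current);
        current->files = files_of_work(work); /* hypothetical */
        task_unlock(current);

        do {
            work->flags |= IO_WQ_WORK_CANCEL;
            wq->do_work(work);
            work = wq->free_work(work);
        } while (work);

        task_lock(current);
        current->files = old_files; /* restore */
        task_unlock(current);
    }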

Thanks,
Joseph

> Actually, it's even nastier than that: none of the io_op_def
> flags would work, because in the io-wq case you can end up doing
> close() with files different from the original ones. I'll think
> about how it can be done tomorrow.
>


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Pavel Begunkov @ 2021-01-19 11:45 UTC
  To: Joseph Qi, Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang

On 19/01/2021 08:00, Joseph Qi wrote:
> 
> 
> On 1/19/21 10:38 AM, Pavel Begunkov wrote:
>> On 19/01/2021 01:58, Joseph Qi wrote:
>>>> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
>>>> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
>>>> it grab a struct file, which is wrong.
>>>> It probably worked out because it just grabbed fd=0/stdin.
>>>>
>>>
>>> I think IO_WQ_WORK_FILES can work, since it will acquire
>>> files when initializing the async cancel request.
>>
>> That's the one controlling files in the first place; needs_file
>> just happened to grab them at submission.
>>
>>> I don't quite understand why we should need IO_WQ_WORK_BLKCG.
>>
>> Because it's set for IORING_OP_CLOSE, and a similar situation
>> may happen, but with an async_cancel issued from io-wq.
>>
> So how about doing a switch and restore in io_run_cancel()? It seems it
> can take care of the direct request, sqthread, and io-wq cases.

It will get ugly pretty quickly, plus this nesting of io-wq handlers,
async_handler() -> io_close(), is not great...

I'm more inclined to skip them in io_wqe_cancel_pending_work(), so
they're not executed inline. That may need some waiting on the
async_cancel side, though, so the semantics don't change. Can you
try out this direction?
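
I.e., something along these lines in the io_wqe_cancel_pending_work()
match loop (untested sketch, loop shape from memory of the current code):

    wq_list_for_each(node, prev, &wqe->work_list) {
        work = container_of(node, struct io_wq_work, list);
        if (!match->fn(work, match->data))
            continue;
        /* don't run no-cancel work (e.g. close) inline here; leave
         * it to a worker, and make the canceller wait for it */
        if (work->flags & IO_WQ_WORK_NO_CANCEL)
            continue;
        io_wqe_remove_pending(wqe, work, prev);
        io_run_cancel(work, wqe);
        ...
    }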


> 
>> Actually, it's even nastier than that: none of the io_op_def
>> flags would work, because in the io-wq case you can end up doing
>> close() with files different from the original ones. I'll think
>> about how it can be done tomorrow.
>>

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Joseph Qi @ 2021-01-19 13:12 UTC
  To: Pavel Begunkov, Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang



On 1/19/21 7:45 PM, Pavel Begunkov wrote:
> On 19/01/2021 08:00, Joseph Qi wrote:
>>
>>
>> On 1/19/21 10:38 AM, Pavel Begunkov wrote:
>>> On 19/01/2021 01:58, Joseph Qi wrote:
>>>>> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
>>>>> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
>>>>> it grab a struct file, which is wrong.
>>>>> It probably worked out because it just grabbed fd=0/stdin.
>>>>>
>>>>
>>>> I think IO_WQ_WORK_FILES can work, since it will acquire
>>>> files when initializing the async cancel request.
>>>
>>> That's the one controlling files in the first place; needs_file
>>> just happened to grab them at submission.
>>>
>>>> I don't quite understand why we should need IO_WQ_WORK_BLKCG.
>>>
>>> Because it's set for IORING_OP_CLOSE, and a similar situation
>>> may happen, but with an async_cancel issued from io-wq.
>>>
>> So how about doing a switch and restore in io_run_cancel()? It seems it
>> can take care of the direct request, sqthread, and io-wq cases.
> 
> It will get ugly pretty quickly, plus this nesting of io-wq handlers,
> async_handler() -> io_close(), is not great...
> 
> I'm more inclined to skip them in io_wqe_cancel_pending_work(), so
> they're not executed inline. That may need some waiting on the
> async_cancel side, though, so the semantics don't change. Can you
> try out this direction?
> 
Sure, I'll try this way and send v2.

Thanks,
Joseph


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Pavel Begunkov @ 2021-01-19 13:39 UTC
  To: Joseph Qi, Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang

On 19/01/2021 13:12, Joseph Qi wrote:
> On 1/19/21 7:45 PM, Pavel Begunkov wrote:
>> On 19/01/2021 08:00, Joseph Qi wrote:
>>> On 1/19/21 10:38 AM, Pavel Begunkov wrote:
>>>> On 19/01/2021 01:58, Joseph Qi wrote:
>>>>>> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
>>>>>> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
>>>>>> it grab a struct file, which is wrong.
>>>>>> It probably worked out because it just grabbed fd=0/stdin.
>>>>>>
>>>>>
>>>>> I think IO_WQ_WORK_FILES can work, since it will acquire
>>>>> files when initializing the async cancel request.
>>>>
>>>> That's the one controlling files in the first place; needs_file
>>>> just happened to grab them at submission.
>>>>
>>>>> I don't quite understand why we should need IO_WQ_WORK_BLKCG.
>>>>
>>>> Because it's set for IORING_OP_CLOSE, and a similar situation
>>>> may happen, but with an async_cancel issued from io-wq.
>>>>
>>> So how about doing a switch and restore in io_run_cancel()? It seems it
>>> can take care of the direct request, sqthread, and io-wq cases.
>>
>> It will get ugly pretty quickly, plus this nesting of io-wq handlers,
>> async_handler() -> io_close(), is not great...
>> 
>> I'm more inclined to skip them in io_wqe_cancel_pending_work(), so
>> they're not executed inline. That may need some waiting on the
>> async_cancel side, though, so the semantics don't change. Can you
>> try out this direction?
>>
> Sure, I'll try this way and send v2.

There may be a much better way: remove IO_WQ_WORK_NO_CANCEL and
move the -EAGAIN section of io_close() before close_fd_get_file(),
so it isn't split in two and left half-done.
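
E.g., very roughly (an untested sketch against the current io_close();
locking and error paths elided, fcheck() used just to peek):

    static int io_close(struct io_kiocb *req, bool force_nonblock, ...)
    {
        struct io_close *close = &req->close;
        struct file *file;
        int ret;

        if (force_nonblock) {
            rcu_read_lock();
            file = fcheck(close->fd); /* peek, don't remove */
            if (file && file->f_op->flush) {
                rcu_read_unlock();
                return -EAGAIN; /* punt before grabbing the file */
            }
            rcu_read_unlock();
        }

        ret = close_fd_get_file(close->fd, &file);
        if (!ret)
            ret = filp_close(file, current->files);
        io_req_complete(req, ret);
        return 0;
    }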

IIRC, it was done this way for historical reasons, when we didn't
put more stuff around files, but I may be wrong.
Jens, do you remember what it was?

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Pavel Begunkov @ 2021-01-19 16:37 UTC
  To: Joseph Qi, Joseph Qi, Jens Axboe; +Cc: io-uring, Xiaoguang Wang

On 19/01/2021 13:39, Pavel Begunkov wrote:
> On 19/01/2021 13:12, Joseph Qi wrote:
>> On 1/19/21 7:45 PM, Pavel Begunkov wrote:
>>> On 19/01/2021 08:00, Joseph Qi wrote:
>>>> On 1/19/21 10:38 AM, Pavel Begunkov wrote:
>>>>> On 19/01/2021 01:58, Joseph Qi wrote:
>>>>>>> Hmm, I was too hasty: for files we need IO_WQ_WORK_FILES,
>>>>>>> plus IO_WQ_WORK_BLKCG for the same reasons. needs_file makes
>>>>>>> it grab a struct file, which is wrong.
>>>>>>> It probably worked out because it just grabbed fd=0/stdin.
>>>>>>>
>>>>>>
>>>>>> I think IO_WQ_WORK_FILES can work, since it will acquire
>>>>>> files when initializing the async cancel request.
>>>>>
>>>>> That's the one controlling files in the first place; needs_file
>>>>> just happened to grab them at submission.
>>>>>
>>>>>> I don't quite understand why we should need IO_WQ_WORK_BLKCG.
>>>>>
>>>>> Because it's set for IORING_OP_CLOSE, and a similar situation
>>>>> may happen, but with an async_cancel issued from io-wq.
>>>>>
>>>> So how about doing a switch and restore in io_run_cancel()? It seems it
>>>> can take care of the direct request, sqthread, and io-wq cases.
>>>
>>> It will get ugly pretty quickly, plus this nesting of io-wq handlers,
>>> async_handler() -> io_close(), is not great...
>>> 
>>> I'm more inclined to skip them in io_wqe_cancel_pending_work(), so
>>> they're not executed inline. That may need some waiting on the
>>> async_cancel side, though, so the semantics don't change. Can you
>>> try out this direction?
>>>
>> Sure, I'll try this way and send v2.
> 
> There may be a much better way: remove IO_WQ_WORK_NO_CANCEL and
> move the -EAGAIN section of io_close() before close_fd_get_file(),
> so it isn't split in two and left half-done.

I believe it is the right way, but there are tricks to it. I hope
you don't mind me and Jens hijacking it and taking care of it --
enough non-technical hassle is expected...

Thanks for reporting it!

> 
> IIRC, it was done this way for historical reasons, when we didn't
> put more stuff around files, but I may be wrong.
> Jens, do you remember what it was?

-- 
Pavel Begunkov


* Re: [PATCH] io_uring: fix NULL pointer dereference for async cancel close
From: Jens Axboe @ 2021-01-19 18:01 UTC
  To: Joseph Qi; +Cc: io-uring, Xiaoguang Wang

On 1/18/21 2:50 AM, Joseph Qi wrote:
> Abaci reported the following crash:
> 
> [... crash trace snipped ...]
> 
> This is caused by files being NULL when we async cancel a close
> request, which has IO_WQ_WORK_NO_CANCEL set and therefore continues to do
> its work. Fix it by also setting needs_file for IORING_OP_ASYNC_CANCEL.

I posted an alternate fix for this:

[PATCH] io_uring: fix SQPOLL IORING_OP_CLOSE cancelation state

Can you give that a spin?

-- 
Jens Axboe

