* [bug report]BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
@ 2022-12-19  7:16 Yi Zhang
  2022-12-19 17:52 ` Jens Axboe
  0 siblings, 1 reply; 4+ messages in thread
From: Yi Zhang @ 2022-12-19  7:16 UTC (permalink / raw)
  To: linux-block

Hello,
The issue below was triggered during blktests nvme-tcp with the for-next
branch (6.1.0, block, 2280cbf6). Please help check it.

[  782.395936] run blktests nvme/013 at 2022-12-18 07:32:09
[  782.425739] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[  782.435780] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[  782.446357] nvmet: creating nvm controller 1 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0042-3910-8039-c6c04f544833.
[  782.460744] nvme nvme0: creating 32 I/O queues.
[  782.466760] nvme nvme0: mapped 32/0/0 default/read/poll queues.
[  782.479615] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
127.0.0.1:4420
[  783.612793] XFS (nvme0n1): Mounting V5 Filesystem
[  783.650705] XFS (nvme0n1): Ending clean mount
[  799.653271] ==================================================================
[  799.660496] BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
[  799.669117] Use-after-free read at 0x000000008c692c21 (in kfence-#11):
[  799.675647]  bfq_exit_icq_bfqq+0x132/0x270
[  799.679753]  bfq_exit_icq+0x5b/0x80
[  799.683255]  exit_io_context+0x81/0xb0
[  799.687015]  do_exit+0x74b/0xaf0
[  799.690256]  kthread_exit+0x25/0x30
[  799.693758]  kthread+0xc8/0x110
[  799.696904]  ret_from_fork+0x1f/0x30
[  799.701991] kfence-#11: 0x00000000f1839eaa-0x0000000011c747a1,
size=568, cache=bfq_queue
[  799.711549] allocated by task 19533 on cpu 9 at 499.180335s:
[  799.717218]  bfq_get_queue+0xe0/0x530
[  799.720884]  bfq_get_bfqq_handle_split+0x75/0x120
[  799.725592]  bfq_insert_requests+0x1d15/0x2710
[  799.730045]  blk_mq_sched_insert_requests+0x5c/0x170
[  799.735021]  blk_mq_flush_plug_list+0x115/0x2e0
[  799.739551]  __blk_flush_plug+0xf2/0x130
[  799.743479]  blk_finish_plug+0x25/0x40
[  799.747231]  __iomap_dio_rw+0x520/0x7b0
[  799.751070]  btrfs_dio_write+0x42/0x50
[  799.754832]  btrfs_do_write_iter+0x2f4/0x5d0
[  799.759112]  nvmet_file_submit_bvec+0xa6/0xe0 [nvmet]
[  799.764193]  nvmet_file_execute_io+0x1a4/0x250 [nvmet]
[  799.769349]  process_one_work+0x1c4/0x380
[  799.773361]  worker_thread+0x4d/0x380
[  799.777028]  kthread+0xe6/0x110
[  799.780174]  ret_from_fork+0x1f/0x30
[  799.785252] freed by task 19533 on cpu 9 at 799.653250s:
[  799.790584]  bfq_put_queue+0x183/0x2c0
[  799.794344]  bfq_exit_icq_bfqq+0x129/0x270
[  799.798442]  bfq_exit_icq+0x5b/0x80
[  799.801934]  exit_io_context+0x81/0xb0
[  799.805687]  do_exit+0x74b/0xaf0
[  799.808920]  kthread_exit+0x25/0x30
[  799.812413]  kthread+0xc8/0x110
[  799.815561]  ret_from_fork+0x1f/0x30
[  799.820648] CPU: 9 PID: 19533 Comm: kworker/dying Not tainted 6.1.0 #1
[  799.827181] Hardware name: Dell Inc. PowerEdge R640/0X45NX, BIOS
2.15.1 06/15/2022
[  799.834746] ==================================================================
[  823.081364] XFS (nvme0n1): Unmounting Filesystem
[  823.159994] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"


--
Best Regards,
  Yi Zhang


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [bug report]BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
  2022-12-19  7:16 [bug report]BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270 Yi Zhang
@ 2022-12-19 17:52 ` Jens Axboe
  2022-12-22 14:49   ` Jan Kara
  0 siblings, 1 reply; 4+ messages in thread
From: Jens Axboe @ 2022-12-19 17:52 UTC (permalink / raw)
  To: Yi Zhang, linux-block; +Cc: Paolo Valente, Jan Kara

On 12/19/22 12:16 AM, Yi Zhang wrote:
> Hello,
> The issue below was triggered during blktests nvme-tcp with the for-next
> branch (6.1.0, block, 2280cbf6). Please help check it.
> 
> [  782.395936] run blktests nvme/013 at 2022-12-18 07:32:09
> [  782.425739] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> [  782.435780] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> [  782.446357] nvmet: creating nvm controller 1 for subsystem
> blktests-subsystem-1 for NQN
> nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0042-3910-8039-c6c04f544833.
> [  782.460744] nvme nvme0: creating 32 I/O queues.
> [  782.466760] nvme nvme0: mapped 32/0/0 default/read/poll queues.
> [  782.479615] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
> 127.0.0.1:4420
> [  783.612793] XFS (nvme0n1): Mounting V5 Filesystem
> [  783.650705] XFS (nvme0n1): Ending clean mount
> [  799.653271] ==================================================================
> [  799.660496] BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
> [  799.669117] Use-after-free read at 0x000000008c692c21 (in kfence-#11):
> [  799.675647]  bfq_exit_icq_bfqq+0x132/0x270
> [  799.679753]  bfq_exit_icq+0x5b/0x80
> [  799.683255]  exit_io_context+0x81/0xb0
> [  799.687015]  do_exit+0x74b/0xaf0
> [  799.690256]  kthread_exit+0x25/0x30
> [  799.693758]  kthread+0xc8/0x110
> [  799.696904]  ret_from_fork+0x1f/0x30
> [  799.701991] kfence-#11: 0x00000000f1839eaa-0x0000000011c747a1,
> size=568, cache=bfq_queue
> [  799.711549] allocated by task 19533 on cpu 9 at 499.180335s:
> [  799.717218]  bfq_get_queue+0xe0/0x530
> [  799.720884]  bfq_get_bfqq_handle_split+0x75/0x120
> [  799.725592]  bfq_insert_requests+0x1d15/0x2710
> [  799.730045]  blk_mq_sched_insert_requests+0x5c/0x170
> [  799.735021]  blk_mq_flush_plug_list+0x115/0x2e0
> [  799.739551]  __blk_flush_plug+0xf2/0x130
> [  799.743479]  blk_finish_plug+0x25/0x40
> [  799.747231]  __iomap_dio_rw+0x520/0x7b0
> [  799.751070]  btrfs_dio_write+0x42/0x50
> [  799.754832]  btrfs_do_write_iter+0x2f4/0x5d0
> [  799.759112]  nvmet_file_submit_bvec+0xa6/0xe0 [nvmet]
> [  799.764193]  nvmet_file_execute_io+0x1a4/0x250 [nvmet]
> [  799.769349]  process_one_work+0x1c4/0x380
> [  799.773361]  worker_thread+0x4d/0x380
> [  799.777028]  kthread+0xe6/0x110
> [  799.780174]  ret_from_fork+0x1f/0x30
> [  799.785252] freed by task 19533 on cpu 9 at 799.653250s:
> [  799.790584]  bfq_put_queue+0x183/0x2c0
> [  799.794344]  bfq_exit_icq_bfqq+0x129/0x270
> [  799.798442]  bfq_exit_icq+0x5b/0x80
> [  799.801934]  exit_io_context+0x81/0xb0
> [  799.805687]  do_exit+0x74b/0xaf0
> [  799.808920]  kthread_exit+0x25/0x30
> [  799.812413]  kthread+0xc8/0x110
> [  799.815561]  ret_from_fork+0x1f/0x30
> [  799.820648] CPU: 9 PID: 19533 Comm: kworker/dying Not tainted 6.1.0 #1
> [  799.827181] Hardware name: Dell Inc. PowerEdge R640/0X45NX, BIOS
> 2.15.1 06/15/2022
> [  799.834746] ==================================================================
> [  823.081364] XFS (nvme0n1): Unmounting Filesystem
> [  823.159994] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"

Please CC maintainers on stuff like this (added).

-- 
Jens Axboe




* Re: [bug report]BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
  2022-12-19 17:52 ` Jens Axboe
@ 2022-12-22 14:49   ` Jan Kara
  2022-12-26  1:17     ` Yu Kuai
  0 siblings, 1 reply; 4+ messages in thread
From: Jan Kara @ 2022-12-22 14:49 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Yi Zhang, linux-block, Paolo Valente, Jan Kara, yukuai3

Thanks for the report and the CC!

On Mon 19-12-22 10:52:57, Jens Axboe wrote:
> On 12/19/22 12:16 AM, Yi Zhang wrote:
> > The issue below was triggered during blktests nvme-tcp with the for-next
> > branch (6.1.0, block, 2280cbf6). Please help check it.
> > 
> > [  782.395936] run blktests nvme/013 at 2022-12-18 07:32:09
> > [  782.425739] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
> > [  782.435780] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
> > [  782.446357] nvmet: creating nvm controller 1 for subsystem
> > blktests-subsystem-1 for NQN
> > nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0042-3910-8039-c6c04f544833.
> > [  782.460744] nvme nvme0: creating 32 I/O queues.
> > [  782.466760] nvme nvme0: mapped 32/0/0 default/read/poll queues.
> > [  782.479615] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
> > 127.0.0.1:4420
> > [  783.612793] XFS (nvme0n1): Mounting V5 Filesystem
> > [  783.650705] XFS (nvme0n1): Ending clean mount
> > [  799.653271] ==================================================================
> > [  799.660496] BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
> > [  799.669117] Use-after-free read at 0x000000008c692c21 (in kfence-#11):
> > [  799.675647]  bfq_exit_icq_bfqq+0x132/0x270
> > [  799.679753]  bfq_exit_icq+0x5b/0x80
> > [  799.683255]  exit_io_context+0x81/0xb0
> > [  799.687015]  do_exit+0x74b/0xaf0
> > [  799.690256]  kthread_exit+0x25/0x30
> > [  799.693758]  kthread+0xc8/0x110
> > [  799.696904]  ret_from_fork+0x1f/0x30
> > [  799.701991] kfence-#11: 0x00000000f1839eaa-0x0000000011c747a1,
> > size=568, cache=bfq_queue
> > [  799.711549] allocated by task 19533 on cpu 9 at 499.180335s:
> > [  799.717218]  bfq_get_queue+0xe0/0x530
> > [  799.720884]  bfq_get_bfqq_handle_split+0x75/0x120
> > [  799.725592]  bfq_insert_requests+0x1d15/0x2710
> > [  799.730045]  blk_mq_sched_insert_requests+0x5c/0x170
> > [  799.735021]  blk_mq_flush_plug_list+0x115/0x2e0
> > [  799.739551]  __blk_flush_plug+0xf2/0x130
> > [  799.743479]  blk_finish_plug+0x25/0x40
> > [  799.747231]  __iomap_dio_rw+0x520/0x7b0
> > [  799.751070]  btrfs_dio_write+0x42/0x50
> > [  799.754832]  btrfs_do_write_iter+0x2f4/0x5d0
> > [  799.759112]  nvmet_file_submit_bvec+0xa6/0xe0 [nvmet]
> > [  799.764193]  nvmet_file_execute_io+0x1a4/0x250 [nvmet]
> > [  799.769349]  process_one_work+0x1c4/0x380
> > [  799.773361]  worker_thread+0x4d/0x380
> > [  799.777028]  kthread+0xe6/0x110
> > [  799.780174]  ret_from_fork+0x1f/0x30
> > [  799.785252] freed by task 19533 on cpu 9 at 799.653250s:
> > [  799.790584]  bfq_put_queue+0x183/0x2c0
> > [  799.794344]  bfq_exit_icq_bfqq+0x129/0x270
> > [  799.798442]  bfq_exit_icq+0x5b/0x80
> > [  799.801934]  exit_io_context+0x81/0xb0
> > [  799.805687]  do_exit+0x74b/0xaf0
> > [  799.808920]  kthread_exit+0x25/0x30
> > [  799.812413]  kthread+0xc8/0x110
> > [  799.815561]  ret_from_fork+0x1f/0x30
> > [  799.820648] CPU: 9 PID: 19533 Comm: kworker/dying Not tainted 6.1.0 #1
> > [  799.827181] Hardware name: Dell Inc. PowerEdge R640/0X45NX, BIOS
> > 2.15.1 06/15/2022
> > [  799.834746] ==================================================================
> > [  823.081364] XFS (nvme0n1): Unmounting Filesystem
> > [  823.159994] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"

Can you use addr2line to find exactly which dereference is causing the
problem? Hum, it seems to point to some strange issue, because we've just
freed bfqq in this exit_io_context() invocation, and seeing you are testing
the linux-block tree, I think the problem might be caused by 64dc8c732f5c
("block, bfq: fix possible uaf for 'bfqq->bic'"). Kuai, I think we've
messed up bfq_exit_icq_bfqq() and now bic_set_bfqq() can access the already
freed 'old_bfqq'. So we need something like:


                spin_lock_irqsave(&bfqd->lock, flags);
                bfqq->bic = NULL;
-               bfq_exit_bfqq(bfqd, bfqq);
                bic_set_bfqq(bic, NULL, is_sync);
+               bfq_exit_bfqq(bfqd, bfqq);
                spin_unlock_irqrestore(&bfqd->lock, flags);

so free bfqq only after it is removed from the bic...

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [bug report]BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
  2022-12-22 14:49   ` Jan Kara
@ 2022-12-26  1:17     ` Yu Kuai
  0 siblings, 0 replies; 4+ messages in thread
From: Yu Kuai @ 2022-12-26  1:17 UTC (permalink / raw)
  To: Jan Kara, Jens Axboe; +Cc: Yi Zhang, linux-block, Paolo Valente, yukuai (C)

Hi, Jan!

On 2022/12/22 22:49, Jan Kara wrote:
> Thanks for the report and the CC!
> 
> On Mon 19-12-22 10:52:57, Jens Axboe wrote:
>> On 12/19/22 12:16 AM, Yi Zhang wrote:
>>> The issue below was triggered during blktests nvme-tcp with the for-next
>>> branch (6.1.0, block, 2280cbf6). Please help check it.
>>>
>>> [  782.395936] run blktests nvme/013 at 2022-12-18 07:32:09
>>> [  782.425739] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
>>> [  782.435780] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
>>> [  782.446357] nvmet: creating nvm controller 1 for subsystem
>>> blktests-subsystem-1 for NQN
>>> nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0042-3910-8039-c6c04f544833.
>>> [  782.460744] nvme nvme0: creating 32 I/O queues.
>>> [  782.466760] nvme nvme0: mapped 32/0/0 default/read/poll queues.
>>> [  782.479615] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
>>> 127.0.0.1:4420
>>> [  783.612793] XFS (nvme0n1): Mounting V5 Filesystem
>>> [  783.650705] XFS (nvme0n1): Ending clean mount
>>> [  799.653271] ==================================================================
>>> [  799.660496] BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270
>>> [  799.669117] Use-after-free read at 0x000000008c692c21 (in kfence-#11):
>>> [  799.675647]  bfq_exit_icq_bfqq+0x132/0x270
>>> [  799.679753]  bfq_exit_icq+0x5b/0x80
>>> [  799.683255]  exit_io_context+0x81/0xb0
>>> [  799.687015]  do_exit+0x74b/0xaf0
>>> [  799.690256]  kthread_exit+0x25/0x30
>>> [  799.693758]  kthread+0xc8/0x110
>>> [  799.696904]  ret_from_fork+0x1f/0x30
>>> [  799.701991] kfence-#11: 0x00000000f1839eaa-0x0000000011c747a1,
>>> size=568, cache=bfq_queue
>>> [  799.711549] allocated by task 19533 on cpu 9 at 499.180335s:
>>> [  799.717218]  bfq_get_queue+0xe0/0x530
>>> [  799.720884]  bfq_get_bfqq_handle_split+0x75/0x120
>>> [  799.725592]  bfq_insert_requests+0x1d15/0x2710
>>> [  799.730045]  blk_mq_sched_insert_requests+0x5c/0x170
>>> [  799.735021]  blk_mq_flush_plug_list+0x115/0x2e0
>>> [  799.739551]  __blk_flush_plug+0xf2/0x130
>>> [  799.743479]  blk_finish_plug+0x25/0x40
>>> [  799.747231]  __iomap_dio_rw+0x520/0x7b0
>>> [  799.751070]  btrfs_dio_write+0x42/0x50
>>> [  799.754832]  btrfs_do_write_iter+0x2f4/0x5d0
>>> [  799.759112]  nvmet_file_submit_bvec+0xa6/0xe0 [nvmet]
>>> [  799.764193]  nvmet_file_execute_io+0x1a4/0x250 [nvmet]
>>> [  799.769349]  process_one_work+0x1c4/0x380
>>> [  799.773361]  worker_thread+0x4d/0x380
>>> [  799.777028]  kthread+0xe6/0x110
>>> [  799.780174]  ret_from_fork+0x1f/0x30
>>> [  799.785252] freed by task 19533 on cpu 9 at 799.653250s:
>>> [  799.790584]  bfq_put_queue+0x183/0x2c0
>>> [  799.794344]  bfq_exit_icq_bfqq+0x129/0x270
>>> [  799.798442]  bfq_exit_icq+0x5b/0x80
>>> [  799.801934]  exit_io_context+0x81/0xb0
>>> [  799.805687]  do_exit+0x74b/0xaf0
>>> [  799.808920]  kthread_exit+0x25/0x30
>>> [  799.812413]  kthread+0xc8/0x110
>>> [  799.815561]  ret_from_fork+0x1f/0x30
>>> [  799.820648] CPU: 9 PID: 19533 Comm: kworker/dying Not tainted 6.1.0 #1
>>> [  799.827181] Hardware name: Dell Inc. PowerEdge R640/0X45NX, BIOS
>>> 2.15.1 06/15/2022
>>> [  799.834746] ==================================================================
>>> [  823.081364] XFS (nvme0n1): Unmounting Filesystem
>>> [  823.159994] nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1"
> 
> Can you use addr2line to find exactly which dereference is causing the
> problem? Hum, it seems to point to some strange issue, because we've just
> freed bfqq in this exit_io_context() invocation, and seeing you are testing
> the linux-block tree, I think the problem might be caused by 64dc8c732f5c
> ("block, bfq: fix possible uaf for 'bfqq->bic'"). Kuai, I think we've
> messed up bfq_exit_icq_bfqq() and now bic_set_bfqq() can access the already
> freed 'old_bfqq'. So we need something like:
> 
> 
>                  spin_lock_irqsave(&bfqd->lock, flags);
>                  bfqq->bic = NULL;
> -               bfq_exit_bfqq(bfqd, bfqq);
>                  bic_set_bfqq(bic, NULL, is_sync);
> +               bfq_exit_bfqq(bfqd, bfqq);
>                  spin_unlock_irqrestore(&bfqd->lock, flags);
> 
> so free bfqq only after it is removed from the bic...

Sorry for the delay, and you're right, that's definitely a problem. 😒

Thanks,
Kuai
> 
> 								Honza
> 



end of thread, other threads:[~2022-12-26  1:17 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-19  7:16 [bug report]BUG: KFENCE: use-after-free read in bfq_exit_icq_bfqq+0x132/0x270 Yi Zhang
2022-12-19 17:52 ` Jens Axboe
2022-12-22 14:49   ` Jan Kara
2022-12-26  1:17     ` Yu Kuai
