linux-scsi.vger.kernel.org archive mirror
* [PATCH] SCSI: fix queue cleanup race before scsi_requeue_run_queue is done
@ 2019-08-09  9:03 zhengbin
  2019-08-09 14:43 ` Bart Van Assche
  0 siblings, 1 reply; 2+ messages in thread
From: zhengbin @ 2019-08-09  9:03 UTC (permalink / raw)
  To: jejb, martin.petersen, ming.lei, linux-scsi; +Cc: houtao1, yanaijie, zhengbin13

KASAN reports a use-after-free on 4.19-stable that no longer occurs
after commit 47cdee29ef9d
("block: move blk_exit_queue into __blk_release_queue").
However, backporting that commit to 4.19-stable would be a lot of work
and carries significant risk. Moreover, we should make sure
scsi_requeue_run_queue has finished before blk_cleanup_queue runs in
master as well.

BUG: KASAN: use-after-free in dd_has_work+0x50/0xe8
Read of size 8 at addr ffff808b57c6f168 by task kworker/53:1H/6910

CPU: 53 PID: 6910 Comm: kworker/53:1H Kdump: loaded Tainted: G
Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.59 01/31/2019
Workqueue: kblockd scsi_requeue_run_queue
Call trace:
 dump_backtrace+0x0/0x270
 show_stack+0x24/0x30
 dump_stack+0xb4/0xe4
 print_address_description+0x68/0x278
 kasan_report+0x204/0x330
 __asan_load8+0x88/0xb0
 dd_has_work+0x50/0xe8
 blk_mq_run_hw_queue+0x19c/0x218
 blk_mq_run_hw_queues+0x7c/0xb0
 scsi_run_queue+0x3ec/0x520
 scsi_requeue_run_queue+0x2c/0x38
 process_one_work+0x2e4/0x6d8
 worker_thread+0x6c/0x6a8
 kthread+0x1b4/0x1c0
 ret_from_fork+0x10/0x18

Allocated by task 46843:
 kasan_kmalloc+0xe0/0x190
 kmem_cache_alloc_node_trace+0x10c/0x258
 dd_init_queue+0x68/0x190
 blk_mq_init_sched+0x1cc/0x300
 elevator_init_mq+0x90/0xe0
 blk_mq_init_allocated_queue+0x700/0x728
 blk_mq_init_queue+0x48/0x90
 scsi_mq_alloc_queue+0x34/0xb0
 scsi_alloc_sdev+0x340/0x530
 scsi_probe_and_add_lun+0x46c/0x1260
 __scsi_scan_target+0x1b8/0x7b0
 scsi_scan_target+0x140/0x150
 fc_scsi_scan_rport+0x164/0x178 [scsi_transport_fc]
 process_one_work+0x2e4/0x6d8
 worker_thread+0x6c/0x6a8
 kthread+0x1b4/0x1c0
 ret_from_fork+0x10/0x18

Freed by task 46843:
 __kasan_slab_free+0x120/0x228
 kasan_slab_free+0x10/0x18
 kfree+0x88/0x218
 dd_exit_queue+0x5c/0x78
 blk_mq_exit_sched+0x104/0x130
 elevator_exit+0xa8/0xc8
 blk_exit_queue+0x48/0x78
 blk_cleanup_queue+0x170/0x248
 __scsi_remove_device+0x84/0x1b0
 scsi_probe_and_add_lun+0xd00/0x1260
 __scsi_scan_target+0x1b8/0x7b0
 scsi_scan_target+0x140/0x150
 fc_scsi_scan_rport+0x164/0x178 [scsi_transport_fc]
 process_one_work+0x2e4/0x6d8
 worker_thread+0x6c/0x6a8
 kthread+0x1b4/0x1c0
 ret_from_fork+0x10/0x18

Fixes: 8dc765d438f1 ("SCSI: fix queue cleanup race before queue initialization is done")
Signed-off-by: zhengbin <zhengbin13@huawei.com>
---
 drivers/scsi/scsi_lib.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 11e64b5..e5ef180 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -531,6 +531,11 @@ void scsi_requeue_run_queue(struct work_struct *work)
 	sdev = container_of(work, struct scsi_device, requeue_work);
 	q = sdev->request_queue;
 	scsi_run_queue(q);
+	/*
+	 * Drop the q_usage_counter reference that was
+	 * taken in scsi_end_request().
+	 */
+	percpu_ref_put(&q->q_usage_counter);
 }

 void scsi_run_host_queues(struct Scsi_Host *shost)
@@ -615,10 +620,11 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
 	if (scsi_target(sdev)->single_lun ||
 	    !list_empty(&sdev->host->starved_list))
 		kblockd_schedule_work(&sdev->requeue_work);
-	else
+	else {
 		blk_mq_run_hw_queues(q, true);
+		percpu_ref_put(&q->q_usage_counter);
+	}

-	percpu_ref_put(&q->q_usage_counter);
 	return false;
 }

--
2.7.4



* Re: [PATCH] SCSI: fix queue cleanup race before scsi_requeue_run_queue is done
  2019-08-09  9:03 [PATCH] SCSI: fix queue cleanup race before scsi_requeue_run_queue is done zhengbin
@ 2019-08-09 14:43 ` Bart Van Assche
  0 siblings, 0 replies; 2+ messages in thread
From: Bart Van Assche @ 2019-08-09 14:43 UTC (permalink / raw)
  To: zhengbin, jejb, martin.petersen, ming.lei, linux-scsi; +Cc: houtao1, yanaijie

On 8/9/19 2:03 AM, zhengbin wrote:
> KASAN reports a use-after-free on 4.19-stable that no longer occurs
> after commit 47cdee29ef9d
> ("block: move blk_exit_queue into __blk_release_queue").
> However, backporting that commit to 4.19-stable would be a lot of work
> and carries significant risk. Moreover, we should make sure
> scsi_requeue_run_queue has finished before blk_cleanup_queue runs in
> master as well.
> 
> [ KASAN report snipped ]
> 
> Fixes: 8dc765d438f1 ("SCSI: fix queue cleanup race before queue initialization is done")
> Signed-off-by: zhengbin <zhengbin13@huawei.com>
> ---
>  drivers/scsi/scsi_lib.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 11e64b5..e5ef180 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -531,6 +531,11 @@ void scsi_requeue_run_queue(struct work_struct *work)
>  	sdev = container_of(work, struct scsi_device, requeue_work);
>  	q = sdev->request_queue;
>  	scsi_run_queue(q);
> +	/*
> +	 * Drop the q_usage_counter reference that was
> +	 * taken in scsi_end_request().
> +	 */
> +	percpu_ref_put(&q->q_usage_counter);
>  }
> 
>  void scsi_run_host_queues(struct Scsi_Host *shost)
> @@ -615,10 +620,11 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
>  	if (scsi_target(sdev)->single_lun ||
>  	    !list_empty(&sdev->host->starved_list))
>  		kblockd_schedule_work(&sdev->requeue_work);
> -	else
> +	else {
>  		blk_mq_run_hw_queues(q, true);
> +		percpu_ref_put(&q->q_usage_counter);
> +	}
> 
> -	percpu_ref_put(&q->q_usage_counter);
>  	return false;
>  }

Can kblockd_schedule_work() return 0? If so, should percpu_ref_put() be
called in that case?
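If it can return 0 (work already queued), one possible shape of the
guard might be the following. This is only an untested sketch of the
accounting your question points at, not a tested fix:

	if (scsi_target(sdev)->single_lun ||
	    !list_empty(&sdev->host->starved_list)) {
		/*
		 * If the work was already queued, the single pending
		 * run will drop only one reference, so drop this
		 * caller's reference here (untested sketch).
		 */
		if (!kblockd_schedule_work(&sdev->requeue_work))
			percpu_ref_put(&q->q_usage_counter);
	} else {
		blk_mq_run_hw_queues(q, true);
		percpu_ref_put(&q->q_usage_counter);
	}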

Bart.


