linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] fix request uaf in nbd_read_stat()
@ 2021-08-08  3:17 Yu Kuai
  2021-08-08  3:17 ` [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock Yu Kuai
  2021-08-08  3:17 ` [PATCH 2/2] nbd: hold tags->lock to prevent access freed request through blk_mq_tag_to_rq() Yu Kuai
  0 siblings, 2 replies; 5+ messages in thread
From: Yu Kuai @ 2021-08-08  3:17 UTC (permalink / raw)
  To: axboe, josef, ming.lei; +Cc: linux-block, linux-kernel, nbd, yukuai3, yi.zhang

Yu Kuai (2):
  blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock
  nbd: hold tags->lock to prevent access freed request through
    blk_mq_tag_to_rq()

 block/blk-mq-tag.c     | 12 ++++++++++++
 drivers/block/nbd.c    | 22 ++++++++++++++++------
 include/linux/blk-mq.h |  2 ++
 3 files changed, 30 insertions(+), 6 deletions(-)

-- 
2.31.1



* [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock
  2021-08-08  3:17 [PATCH 0/2] fix request uaf in nbd_read_stat() Yu Kuai
@ 2021-08-08  3:17 ` Yu Kuai
  2021-08-08 16:44   ` Bart Van Assche
  2021-08-08  3:17 ` [PATCH 2/2] nbd: hold tags->lock to prevent access freed request through blk_mq_tag_to_rq() Yu Kuai
  1 sibling, 1 reply; 5+ messages in thread
From: Yu Kuai @ 2021-08-08  3:17 UTC (permalink / raw)
  To: axboe, josef, ming.lei; +Cc: linux-block, linux-kernel, nbd, yukuai3, yi.zhang

Ming Lei fixed the request UAF while iterating tags; however, some
drivers call blk_mq_tag_to_rq() directly to look up a request by tag,
so the problem can still exist there because blk_mq_tags->lock should
be held around the lookup.

Thus add blk_mq_tags_lock() and blk_mq_tags_unlock() so that drivers
can lock and unlock blk_mq_tags->lock when they are not sure that the
request is still valid.
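
For example, a driver that looks up a request by tag would be expected
to do something along these lines (illustrative sketch only; "tags" and
"tag" stand for the driver's tag map and the tag it received):

	unsigned long flags;
	struct request *rq;

	blk_mq_tags_lock(tags, &flags);
	rq = blk_mq_tag_to_rq(tags, tag);
	/* only trust started requests while the tag map is locked */
	if (rq && !blk_mq_request_started(rq))
		rq = NULL;
	blk_mq_tags_unlock(tags, &flags);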

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-mq-tag.c     | 12 ++++++++++++
 include/linux/blk-mq.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 86f87346232a..388d447c993a 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -652,3 +652,15 @@ u32 blk_mq_unique_tag(struct request *rq)
 		(rq->tag & BLK_MQ_UNIQUE_TAG_MASK);
 }
 EXPORT_SYMBOL(blk_mq_unique_tag);
+
+void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags)
+{
+	spin_lock_irqsave(&tags->lock, *flags);
+}
+EXPORT_SYMBOL(blk_mq_tags_lock);
+
+void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags)
+{
+	spin_unlock_irqrestore(&tags->lock, *flags);
+}
+EXPORT_SYMBOL(blk_mq_tags_unlock);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 1d18447ebebc..b4bad4d6a3a8 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -635,4 +635,6 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio);
 void blk_mq_hctx_set_fq_lock_class(struct blk_mq_hw_ctx *hctx,
 		struct lock_class_key *key);
 
+void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags);
+void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags);
 #endif
-- 
2.31.1



* [PATCH 2/2] nbd: hold tags->lock to prevent access freed request through blk_mq_tag_to_rq()
  2021-08-08  3:17 [PATCH 0/2] fix request uaf in nbd_read_stat() Yu Kuai
  2021-08-08  3:17 ` [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock Yu Kuai
@ 2021-08-08  3:17 ` Yu Kuai
  1 sibling, 0 replies; 5+ messages in thread
From: Yu Kuai @ 2021-08-08  3:17 UTC (permalink / raw)
  To: axboe, josef, ming.lei; +Cc: linux-block, linux-kernel, nbd, yukuai3, yi.zhang

Our test reported a use-after-free (UAF) problem:

Read of size 4 at addr ffff80036b790b54 by task kworker/u9:1/31105

Workqueue: knbd0-recv recv_work
Call trace:
 dump_backtrace+0x0/0x310 arch/arm64/kernel/time.c:78
 show_stack+0x28/0x38 arch/arm64/kernel/traps.c:158
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x144/0x1b4 lib/dump_stack.c:118
 print_address_description+0x68/0x2d0 mm/kasan/report.c:253
 kasan_report_error mm/kasan/report.c:351 [inline]
 kasan_report+0x134/0x2f0 mm/kasan/report.c:409
 check_memory_region_inline mm/kasan/kasan.c:260 [inline]
 __asan_load4+0x88/0xb0 mm/kasan/kasan.c:699
 __read_once_size include/linux/compiler.h:193 [inline]
 blk_mq_rq_state block/blk-mq.h:106 [inline]
 blk_mq_request_started+0x24/0x40 block/blk-mq.c:644
 nbd_read_stat drivers/block/nbd.c:670 [inline]
 recv_work+0x1bc/0x890 drivers/block/nbd.c:749
 process_one_work+0x3ec/0x9e0 kernel/workqueue.c:2147
 worker_thread+0x80/0x9d0 kernel/workqueue.c:2302
 kthread+0x1d8/0x1e0 kernel/kthread.c:255
 ret_from_fork+0x10/0x18 arch/arm64/kernel/entry.S:1174

This is because tags->static_rqs can be freed while tags->rqs still
holds stale pointers to the freed requests. Ming Lei fixed this for
tag iteration; however, the problem still exists for blk_mq_tag_to_rq()
callers.

Thus fix it by holding tags->lock around the lookup, so that it cannot
race with tags->rqs being cleared and tags->static_rqs being freed.
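
Roughly, the race being closed looks like this (the exact freeing path
may vary, e.g. when the tag set is resized or freed):

	recv_work()                        blk_mq_free_rqs()
	  nbd_read_stat()
	    blk_mq_tag_to_rq()
	      reads tags->rqs[tag]
	                                     clears tags->rqs under tags->lock
	                                     frees tags->static_rqs pages
	    blk_mq_request_started(req)      <- use after free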

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/block/nbd.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index c38317979f74..c7ca16f0adbd 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -712,12 +712,22 @@ static struct nbd_cmd *nbd_read_stat(struct nbd_device *nbd, int index)
 	memcpy(&handle, reply.handle, sizeof(handle));
 	tag = nbd_handle_to_tag(handle);
 	hwq = blk_mq_unique_tag_to_hwq(tag);
-	if (hwq < nbd->tag_set.nr_hw_queues)
-		req = blk_mq_tag_to_rq(nbd->tag_set.tags[hwq],
-				       blk_mq_unique_tag_to_tag(tag));
-	if (!req || !blk_mq_request_started(req)) {
-		dev_err(disk_to_dev(nbd->disk), "Unexpected reply (%d) %p\n",
-			tag, req);
+	if (hwq < nbd->tag_set.nr_hw_queues) {
+		unsigned long flags;
+		struct blk_mq_tags *tags = nbd->tag_set.tags[hwq];
+
+		blk_mq_tags_lock(tags, &flags);
+		req = blk_mq_tag_to_rq(tags, blk_mq_unique_tag_to_tag(tag));
+		if (!req || !blk_mq_request_started(req)) {
+			dev_err(disk_to_dev(nbd->disk), "Request not started (%d) %p\n",
+				tag, req);
+			req = NULL;
+		}
+		blk_mq_tags_unlock(tags, &flags);
+	}
+
+	if (!req) {
+		dev_err(disk_to_dev(nbd->disk), "Unexpected reply (%d)\n", tag);
 		return ERR_PTR(-ENOENT);
 	}
 	trace_nbd_header_received(req, handle);
-- 
2.31.1



* Re: [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock
  2021-08-08  3:17 ` [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock Yu Kuai
@ 2021-08-08 16:44   ` Bart Van Assche
  2021-08-09  1:11     ` yukuai (C)
  0 siblings, 1 reply; 5+ messages in thread
From: Bart Van Assche @ 2021-08-08 16:44 UTC (permalink / raw)
  To: Yu Kuai, axboe, josef, ming.lei; +Cc: linux-block, linux-kernel, nbd, yi.zhang

On 8/7/21 8:17 PM, Yu Kuai wrote:
> +void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags)
> +{
> +	spin_lock_irqsave(&tags->lock, *flags);
> +}
> +EXPORT_SYMBOL(blk_mq_tags_lock);
> +
> +void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags)
> +{
> +	spin_unlock_irqrestore(&tags->lock, *flags);
> +}
> +EXPORT_SYMBOL(blk_mq_tags_unlock);

The tag map lock is an implementation detail and hence this lock must
not be used directly by block drivers. I propose to introduce and export
a new function to block drivers that does the following:
* Lock tags->lock.
* Call blk_mq_tag_to_rq().
* Check whether the request is in the started state. If so, increment
its reference count.
* Unlock tags->lock.
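
Something along these lines (just a rough sketch; the helper name and
the exact calling convention are not meant to be final):

	struct request *blk_mq_find_and_get_req(struct blk_mq_tags *tags,
						unsigned int tag)
	{
		struct request *rq;
		unsigned long flags;

		spin_lock_irqsave(&tags->lock, flags);
		rq = blk_mq_tag_to_rq(tags, tag);
		/* only hand out started requests, with a reference held */
		if (rq && (!blk_mq_request_started(rq) ||
			   !refcount_inc_not_zero(&rq->ref)))
			rq = NULL;
		spin_unlock_irqrestore(&tags->lock, flags);
		return rq;
	}

The caller would then have to drop the reference once it is done with
the request.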

Thanks,

Bart.


* Re: [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock
  2021-08-08 16:44   ` Bart Van Assche
@ 2021-08-09  1:11     ` yukuai (C)
  0 siblings, 0 replies; 5+ messages in thread
From: yukuai (C) @ 2021-08-09  1:11 UTC (permalink / raw)
  To: Bart Van Assche, axboe, josef, ming.lei
  Cc: linux-block, linux-kernel, nbd, yi.zhang

On 2021/08/09 0:44, Bart Van Assche wrote:
> On 8/7/21 8:17 PM, Yu Kuai wrote:
>> +void blk_mq_tags_lock(struct blk_mq_tags *tags, unsigned long *flags)
>> +{
>> +	spin_lock_irqsave(&tags->lock, *flags);
>> +}
>> +EXPORT_SYMBOL(blk_mq_tags_lock);
>> +
>> +void blk_mq_tags_unlock(struct blk_mq_tags *tags, unsigned long *flags)
>> +{
>> +	spin_unlock_irqrestore(&tags->lock, *flags);
>> +}
>> +EXPORT_SYMBOL(blk_mq_tags_unlock);
> 
> The tag map lock is an implementation detail and hence this lock must
> not be used directly by block drivers. I propose to introduce and export
> a new function to block drivers that does the following:
> * Lock tags->lock.
> * Call blk_mq_tag_to_rq().
> * Check whether the request is in the started state. If so, increment
> its reference count.
> * Unlock tags->lock.

Hi, Bart

Thanks for your advice, I will do this in the next iteration.

Best regards
Kuai



end of thread, other threads:[~2021-08-09  1:11 UTC | newest]

Thread overview: 5+ messages
2021-08-08  3:17 [PATCH 0/2] fix request uaf in nbd_read_stat() Yu Kuai
2021-08-08  3:17 ` [PATCH 1/2] blk-mq: add two interfaces to lock/unlock blk_mq_tags->lock Yu Kuai
2021-08-08 16:44   ` Bart Van Assche
2021-08-09  1:11     ` yukuai (C)
2021-08-08  3:17 ` [PATCH 2/2] nbd: hold tags->lock to prevent access freed request through blk_mq_tag_to_rq() Yu Kuai
