From: "yukuai (C)" <yukuai3@huawei.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: <josef@toxicpanda.com>, <axboe@kernel.dk>, <hch@infradead.org>,
<linux-block@vger.kernel.org>, <nbd@other.debian.org>,
<linux-kernel@vger.kernel.org>, <yi.zhang@huawei.com>
Subject: Re: [patch v8 7/7] nbd: fix uaf in nbd_handle_reply()
Date: Thu, 16 Sep 2021 21:10:37 +0800
Message-ID: <f0a72b72-19c9-f01d-806d-d27f854dea8f@huawei.com>
In-Reply-To: <YUM/cNzr6PTXFVAX@T590>
On 2021/09/16 20:58, Ming Lei wrote:
> On Thu, Sep 16, 2021 at 05:33:50PM +0800, Yu Kuai wrote:
>> There is a problem that nbd_handle_reply() might access a freed request:
>>
>> 1) At first, a normal I/O is submitted and completed with an I/O scheduler:
>>
>> internal_tag = blk_mq_get_tag -> get tag from sched_tags
>>  blk_mq_rq_ctx_init
>>   sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
>> ...
>> blk_mq_get_driver_tag
>>  __blk_mq_get_driver_tag -> get tag from tags
>>  tags->rq[tag] = sched_tags->static_rq[internal_tag]
>>
>> So, both tags->rq[tag] and sched_tags->rq[internal_tag] are pointing
>> to the same request: sched_tags->static_rq[internal_tag], even after
>> the I/O is finished.
>>
>> 2) The nbd server sends a reply with a random tag directly:
>>
>> recv_work
>>  nbd_handle_reply
>>   blk_mq_tag_to_rq(tags, tag)
>>    rq = tags->rq[tag]
>>
>> 3) If sched_tags->static_rq is freed concurrently:
>>
>> blk_mq_sched_free_requests
>>  blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
>>   -> step 2) accesses rq before the rq mapping is cleared
>>   blk_mq_clear_rq_mapping(set, tags, hctx_idx);
>>   __free_pages() -> rq is freed here
>>
>> 4) Then, nbd continues to use the freed request in nbd_handle_reply().
>>
>> Fix the problem by getting 'q_usage_counter' before blk_mq_tag_to_rq();
>> this ensures the request can't be freed, because 'q_usage_counter' is
>> not zero.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>> drivers/block/nbd.c | 14 ++++++++++++++
>> 1 file changed, 14 insertions(+)
>>
>> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
>> index 69dc5eac9ad3..b3a47fc6237f 100644
>> --- a/drivers/block/nbd.c
>> +++ b/drivers/block/nbd.c
>> @@ -825,6 +825,7 @@ static void recv_work(struct work_struct *work)
>>  						     work);
>>  	struct nbd_device *nbd = args->nbd;
>>  	struct nbd_config *config = nbd->config;
>> +	struct request_queue *q = nbd->disk->queue;
>>  	struct nbd_sock *nsock;
>>  	struct nbd_cmd *cmd;
>>  	struct request *rq;
>> @@ -835,7 +836,20 @@ static void recv_work(struct work_struct *work)
>>  		if (nbd_read_reply(nbd, args->index, &reply))
>>  			break;
>>
>> +		/*
>> +		 * Grab .q_usage_counter so request pool won't go away, then no
>> +		 * request use-after-free is possible during nbd_handle_reply().
>> +		 * If queue is frozen, there won't be any inflight requests, we
>> +		 * needn't handle the incoming garbage message.
>> +		 */
>> +		if (!percpu_ref_tryget(&q->q_usage_counter)) {
>> +			dev_err(disk_to_dev(nbd->disk), "%s: no io inflight\n",
>> +					__func__);
>> +			break;
>> +		}
>> +
>>  		cmd = nbd_handle_reply(nbd, args->index, &reply);
>> +		percpu_ref_put(&q->q_usage_counter);
>>  		if (IS_ERR(cmd))
>>  			break;
>
> The refcount needs to be held when completing the request, because the
> request may be completed from another code path; the request pool could
> then be freed by that code path while the request is still being referenced.
Hi,

The request can't be completed concurrently, so putting the ref here is
safe. There used to be a comment here where I tried to explain that...
It's fine with me to move it behind the completion anyway.
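
Just to make that concrete, here is a rough sketch of the recv_work()
loop with the ref put moved behind the completion. It is for
illustration only (it combines the v8 hunk above with the existing
completion code in recv_work(); it is not an actual follow-up patch,
and error reporting is trimmed):

	while (1) {
		if (nbd_read_reply(nbd, args->index, &reply))
			break;

		/* Hold q_usage_counter across the whole reply handling. */
		if (!percpu_ref_tryget(&q->q_usage_counter))
			break;

		cmd = nbd_handle_reply(nbd, args->index, &reply);
		if (IS_ERR(cmd)) {
			percpu_ref_put(&q->q_usage_counter);
			break;
		}

		rq = blk_mq_rq_from_pdu(cmd);
		blk_mq_complete_request(rq);

		/* Only dropped once the request has been completed. */
		percpu_ref_put(&q->q_usage_counter);
	}

Either way the tryget has to happen before blk_mq_tag_to_rq() inside
nbd_handle_reply(); the only difference is whether the put may happen
before blk_mq_complete_request() or has to come after it.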
Thanks,
Kuai
Thread overview: 30+ messages
2021-09-16 9:33 [patch v8 0/7] handle unexpected message from server Yu Kuai
2021-09-16 9:33 ` [patch v8 1/7] nbd: don't handle response without a corresponding request message Yu Kuai
2021-09-16 9:33 ` [patch v8 2/7] nbd: make sure request completion won't concurrent Yu Kuai
2021-09-16 9:33 ` [patch v8 3/7] nbd: check sock index in nbd_read_stat() Yu Kuai
2021-09-19 10:34 ` yukuai (C)
2021-09-22 9:22 ` Ming Lei
2021-09-22 12:12 ` Eric Blake
2021-09-22 12:21 ` yukuai (C)
2021-09-22 15:56 ` Wouter Verhelst
2021-09-23 0:29 ` Ming Lei
2021-09-16 9:33 ` [patch v8 4/7] nbd: don't start request if nbd_queue_rq() failed Yu Kuai
2021-09-16 12:48 ` Ming Lei
2021-09-16 9:33 ` [patch v8 5/7] nbd: clean up return value checking of sock_xmit() Yu Kuai
2021-09-16 12:49 ` Ming Lei
2021-09-16 9:33 ` [patch v8 6/7] nbd: partition nbd_read_stat() into nbd_read_reply() and nbd_handle_reply() Yu Kuai
2021-09-16 12:53 ` Ming Lei
2021-09-16 9:33 ` [patch v8 7/7] nbd: fix uaf in nbd_handle_reply() Yu Kuai
2021-09-16 12:58 ` Ming Lei
2021-09-16 13:10 ` yukuai (C) [this message]
2021-09-16 13:55 ` Ming Lei
2021-09-16 14:05 ` yukuai (C)
2021-09-16 14:18 ` [PATCH v9] " Yu Kuai
2021-09-17 3:33 ` Ming Lei
2021-10-17 13:09 ` Jens Axboe
2021-09-23 13:33 ` [patch v8 0/7] handle unexpected message from server yukuai (C)
2021-09-29 12:54 ` yukuai (C)
2021-10-08 7:17 ` yukuai (C)
2021-10-15 8:19 ` yukuai (C)
2021-10-15 16:08 ` Josef Bacik
2021-10-17 13:09 ` Jens Axboe