From: Sagi Grimberg <sagi@grimberg.me>
To: Jianchao Wang <jianchao.w.wang@oracle.com>,
	keith.busch@intel.com, axboe@fb.com, hch@lst.de
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before free the queue
Date: Wed, 9 May 2018 18:06:46 +0300
Message-ID: <7096b02a-956a-6765-7839-227c154d8336@grimberg.me>
In-Reply-To: <1525420958-9537-1-git-send-email-jianchao.w.wang@oracle.com>



On 05/04/2018 11:02 AM, Jianchao Wang wrote:
> When nvme_init_identify in nvme_rdma_configure_admin_queue fails,
> ctrl->queues[0] is freed but NVME_RDMA_Q_LIVE is still set.
> If nvme_rdma_stop_queue is then invoked, it hits a use-after-free,
> which can cause memory corruption.
>   BUG: KASAN: use-after-free in rdma_disconnect+0x1f/0xe0 [rdma_cm]
>   Read of size 8 at addr ffff8801dc3969c0 by task kworker/u16:3/9304
> 
>   CPU: 3 PID: 9304 Comm: kworker/u16:3 Kdump: loaded Tainted: G        W         4.17.0-rc3+ #20
>   Workqueue: nvme-delete-wq nvme_delete_ctrl_work
>   Call Trace:
>    dump_stack+0x91/0xeb
>    print_address_description+0x6b/0x290
>    kasan_report+0x261/0x360
>    rdma_disconnect+0x1f/0xe0 [rdma_cm]
>    nvme_rdma_stop_queue+0x25/0x40 [nvme_rdma]
>    nvme_rdma_shutdown_ctrl+0xf3/0x150 [nvme_rdma]
>    nvme_delete_ctrl_work+0x98/0xe0
>    process_one_work+0x3ca/0xaa0
>    worker_thread+0x4e2/0x6c0
>    kthread+0x18d/0x1e0
>    ret_from_fork+0x24/0x30
> 
> To fix this, clear NVME_RDMA_Q_LIVE before freeing ctrl->queues[0].
> The queue is about to be freed, so it is certainly no longer LIVE.
> 
> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
> ---
>   drivers/nvme/host/rdma.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index fd965d0..ffbfe82 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -812,6 +812,11 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
>   	if (new)
>   		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
>   out_free_queue:
> +	/*
> +	 * The queue will be freed, so it is not LIVE any more.
> +	 * This could avoid use-after-free in nvme_rdma_stop_queue.
> +	 */
> +	clear_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
>   	nvme_rdma_free_queue(&ctrl->queues[0]);
>   	return error;
>   }
> 
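
For context, nvme_rdma_stop_queue() is gated on the LIVE bit, roughly like
the sketch below (paraphrased from the 4.17-era driver, not quoted
verbatim):

static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
{
	/* only a queue that is still marked live gets torn down */
	if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
		return;

	rdma_disconnect(queue->cm_id);	/* use-after-free if cm_id is already destroyed */
	ib_drain_qp(queue->qp);
}

Leaving the bit set after the queue has already been freed is what lets the
later shutdown path reach rdma_disconnect() on freed memory.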

The correct fix would be to add a goto label for stop_queue and call
nvme_rdma_stop_queue() in all the failure cases after
nvme_rdma_start_queue() succeeds.
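
To illustrate, a rough sketch of that restructuring (not an actual patch:
the out_stop_queue label name and the surrounding cleanup labels are
assumptions about the 4.17-era nvme_rdma_configure_admin_queue, and the
intermediate failure sites between start_queue and init_identify are
omitted):

	error = nvme_rdma_start_queue(ctrl, 0);
	if (error)
		goto out_cleanup_queue;

	error = nvme_init_identify(&ctrl->ctrl);
	if (error)
		goto out_stop_queue;	/* queue is live here, stop it first */

	return 0;

out_stop_queue:
	/* clear NVME_RDMA_Q_LIVE and disconnect while cm_id/qp still exist */
	nvme_rdma_stop_queue(&ctrl->queues[0]);
out_cleanup_queue:
	if (new)
		blk_cleanup_queue(ctrl->ctrl.admin_q);
out_free_tagset:
	if (new)
		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
out_free_queue:
	nvme_rdma_free_queue(&ctrl->queues[0]);
	return error;

The point is that the queue is stopped while its RDMA resources are still
valid, so the later nvme_rdma_stop_queue() call from shutdown/delete finds
the LIVE bit already cleared and returns without touching freed memory.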


Thread overview: 11+ messages
2018-05-04  8:02 [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before free the queue Jianchao Wang
2018-05-04  9:19 ` Johannes Thumshirn
2018-05-09  5:13 ` Christoph Hellwig
2018-05-09 15:06 ` Sagi Grimberg [this message]
2018-05-16  9:49   ` jianchao.wang
2018-05-16 11:48     ` Max Gurtovoy