* [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue
@ 2018-05-04  8:02 ` Jianchao Wang
From: Jianchao Wang @ 2018-05-04  8:02 UTC
  To: keith.busch, axboe, hch, sagi; +Cc: linux-nvme, linux-kernel

When nvme_init_identify in nvme_rdma_configure_admin_queue fails,
ctrl->queues[0] is freed but NVME_RDMA_Q_LIVE is left set. If
nvme_rdma_stop_queue is invoked afterwards, we hit a use-after-free
that corrupts memory:
 BUG: KASAN: use-after-free in rdma_disconnect+0x1f/0xe0 [rdma_cm]
 Read of size 8 at addr ffff8801dc3969c0 by task kworker/u16:3/9304

 CPU: 3 PID: 9304 Comm: kworker/u16:3 Kdump: loaded Tainted: G        W         4.17.0-rc3+ #20
 Workqueue: nvme-delete-wq nvme_delete_ctrl_work
 Call Trace:
  dump_stack+0x91/0xeb
  print_address_description+0x6b/0x290
  kasan_report+0x261/0x360
  rdma_disconnect+0x1f/0xe0 [rdma_cm]
  nvme_rdma_stop_queue+0x25/0x40 [nvme_rdma]
  nvme_rdma_shutdown_ctrl+0xf3/0x150 [nvme_rdma]
  nvme_delete_ctrl_work+0x98/0xe0
  process_one_work+0x3ca/0xaa0
  worker_thread+0x4e2/0x6c0
  kthread+0x18d/0x1e0
  ret_from_fork+0x24/0x30

To fix it, clear NVME_RDMA_Q_LIVE before freeing ctrl->queues[0].
Once the queue is freed, it certainly is not LIVE any more.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 drivers/nvme/host/rdma.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index fd965d0..ffbfe82 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -812,6 +812,11 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (new)
 		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
 out_free_queue:
+	/*
+	 * The queue is about to be freed, so it is not LIVE any more.
+	 * Clearing the bit avoids a use-after-free in nvme_rdma_stop_queue().
+	 */
+	clear_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
 	nvme_rdma_free_queue(&ctrl->queues[0]);
 	return error;
 }
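
For reference, the stop path that trips over the stale bit looks
roughly like this (a simplified sketch of the 4.17-era driver, not
the exact code):

static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
{
	/* The LIVE bit is the only gate on the teardown path. */
	if (!test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
		return;

	/*
	 * If the queue was already freed with the bit left set, this
	 * dereferences a stale cm_id -- the KASAN splat above.
	 */
	rdma_disconnect(queue->cm_id);
	ib_drain_qp(queue->qp);
}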
-- 
2.7.4

* Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue
  2018-05-04  8:02 ` Jianchao Wang
@ 2018-05-04  9:19   ` Johannes Thumshirn
From: Johannes Thumshirn @ 2018-05-04  9:19 UTC
  To: Jianchao Wang; +Cc: keith.busch, axboe, hch, sagi, linux-kernel, linux-nvme

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

* Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue
  2018-05-04  8:02 ` Jianchao Wang
@ 2018-05-09  5:13   ` Christoph Hellwig
From: Christoph Hellwig @ 2018-05-09  5:13 UTC
  To: Jianchao Wang; +Cc: keith.busch, axboe, hch, sagi, linux-kernel, linux-nvme

Looks fine,

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue
  2018-05-04  8:02 ` Jianchao Wang
@ 2018-05-09 15:06   ` Sagi Grimberg
From: Sagi Grimberg @ 2018-05-09 15:06 UTC
  To: Jianchao Wang, keith.busch, axboe, hch; +Cc: linux-nvme, linux-kernel

On 05/04/2018 11:02 AM, Jianchao Wang wrote:
> When nvme_init_identify in nvme_rdma_configure_admin_queue fails,
> ctrl->queues[0] is freed but NVME_RDMA_Q_LIVE is left set. If
> nvme_rdma_stop_queue is invoked afterwards, we hit a use-after-free
> that corrupts memory:
>   BUG: KASAN: use-after-free in rdma_disconnect+0x1f/0xe0 [rdma_cm]
>   Read of size 8 at addr ffff8801dc3969c0 by task kworker/u16:3/9304
> 
>   CPU: 3 PID: 9304 Comm: kworker/u16:3 Kdump: loaded Tainted: G        W         4.17.0-rc3+ #20
>   Workqueue: nvme-delete-wq nvme_delete_ctrl_work
>   Call Trace:
>    dump_stack+0x91/0xeb
>    print_address_description+0x6b/0x290
>    kasan_report+0x261/0x360
>    rdma_disconnect+0x1f/0xe0 [rdma_cm]
>    nvme_rdma_stop_queue+0x25/0x40 [nvme_rdma]
>    nvme_rdma_shutdown_ctrl+0xf3/0x150 [nvme_rdma]
>    nvme_delete_ctrl_work+0x98/0xe0
>    process_one_work+0x3ca/0xaa0
>    worker_thread+0x4e2/0x6c0
>    kthread+0x18d/0x1e0
>    ret_from_fork+0x24/0x30
> 
> To fix it, clear NVME_RDMA_Q_LIVE before freeing ctrl->queues[0].
> Once the queue is freed, it certainly is not LIVE any more.
> 
> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
> ---
>   drivers/nvme/host/rdma.c | 5 +++++
>   1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
> index fd965d0..ffbfe82 100644
> --- a/drivers/nvme/host/rdma.c
> +++ b/drivers/nvme/host/rdma.c
> @@ -812,6 +812,11 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
>   	if (new)
>   		nvme_rdma_free_tagset(&ctrl->ctrl, ctrl->ctrl.admin_tagset);
>   out_free_queue:
> +	/*
> +	 * The queue is about to be freed, so it is not LIVE any more.
> +	 * Clearing the bit avoids a use-after-free in nvme_rdma_stop_queue().
> +	 */
> +	clear_bit(NVME_RDMA_Q_LIVE, &ctrl->queues[0].flags);
>   	nvme_rdma_free_queue(&ctrl->queues[0]);
>   	return error;
>   }
> 

The correct fix would be to add a goto label for stop_queue and call
nvme_rdma_stop_queue() in all the failure cases after
nvme_rdma_start_queue().
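
Sketched out, that unwind would look something like this (the label
names and surrounding calls are illustrative, not the actual V2 patch):

	error = nvme_rdma_start_queue(ctrl, 0);
	if (error)
		goto out_free_queue;

	error = nvme_init_identify(&ctrl->ctrl);
	if (error)
		goto out_stop_queue;	/* new label */

	return 0;

out_stop_queue:
	/*
	 * Disconnect while ctrl->queues[0] is still allocated; this
	 * also clears NVME_RDMA_Q_LIVE, so no stale flag survives.
	 */
	nvme_rdma_stop_queue(&ctrl->queues[0]);
out_free_queue:
	nvme_rdma_free_queue(&ctrl->queues[0]);
	return error;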

* Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue
  2018-05-09 15:06   ` Sagi Grimberg
@ 2018-05-16  9:49     ` jianchao.wang
From: jianchao.wang @ 2018-05-16  9:49 UTC
  To: Sagi Grimberg, keith.busch, axboe, hch; +Cc: linux-nvme, linux-kernel

Hi Sagi

On 05/09/2018 11:06 PM, Sagi Grimberg wrote:
> The correct fix would be to add a tag for stop_queue and call
> nvme_rdma_stop_queue() in all the failure cases after
> nvme_rdma_start_queue.

Would you please take a look at the V2 at the following link?
http://lists.infradead.org/pipermail/linux-nvme/2018-May/017330.html

Thanks in advance
Jianchao

* Re: [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue
  2018-05-16  9:49     ` jianchao.wang
@ 2018-05-16 11:48     ` Max Gurtovoy
From: Max Gurtovoy @ 2018-05-16 11:48 UTC

Hi Jianchao,

On 5/16/2018 12:49 PM, jianchao.wang wrote:
> Hi Sagi
> 
> On 05/09/2018 11:06 PM, Sagi Grimberg wrote:
>> The correct fix would be to add a goto label for stop_queue and call
>> nvme_rdma_stop_queue() in all the failure cases after
>> nvme_rdma_start_queue().
> 
> Would you please take a look at the V2 at the following link?
> http://lists.infradead.org/pipermail/linux-nvme/2018-May/017330.html

I missed this email for some reason, but this patch looks good to me.
Can you bump it on the mailing list so I can add my Reviewed-by tag
there?

-Max.

> 
> Thanks in advance
> Jianchao

Thread overview: 6 messages
2018-05-04  8:02 [PATCH] nvme-rdma: clear NVME_RDMA_Q_LIVE before freeing the queue Jianchao Wang
2018-05-04  9:19 ` Johannes Thumshirn
2018-05-09  5:13 ` Christoph Hellwig
2018-05-09 15:06 ` Sagi Grimberg
2018-05-16  9:49   ` jianchao.wang
2018-05-16 11:48     ` Max Gurtovoy
