From: Robin Murphy <robin.murphy@arm.com>
To: joro@8bytes.org, will@kernel.org
Cc: iommu@lists.linux-foundation.org, suravee.suthikulpanit@amd.com,
	baolu.lu@linux.intel.com, willy@infradead.org,
	linux-kernel@vger.kernel.org, john.garry@huawei.com,
	linux-mm@kvack.org, hch@lst.de,
	Xiongfeng Wang <wangxiongfeng2@huawei.com>
Subject: [PATCH v3 1/9] iommu/iova: Fix race between FQ timeout and teardown
Date: Fri, 17 Dec 2021 15:30:55 +0000
Message-ID: <0a365e5b07f14b7344677ad6a9a734966a8422ce.1639753638.git.robin.murphy@arm.com>
In-Reply-To: <cover.1639753638.git.robin.murphy@arm.com>

From: Xiongfeng Wang <wangxiongfeng2@huawei.com>

It turns out to be possible for hotplugging out a device to reach the
stage of tearing down the device's group and default domain before the
domain's flush queue has drained naturally. At that point, it is
possible for the timeout to expire just before the del_timer() call in
free_iova_flush_queue(), such that we then proceed to free the FQ
resources while fq_flush_timeout() is still accessing them on another
CPU. Crashes due to this have been observed in the wild while removing
NVMe devices.

Close the race window by using del_timer_sync() to safely wait for any
active timeout handler to finish before we start to free things. We
already avoid any locking in free_iova_flush_queue() since the FQ is
supposed to be inactive anyway, so the potential deadlock scenario does
not apply.

Fixes: 9a005a800ae8 ("iommu/iova: Add flush timer")
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
[ rm: rewrite commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/iova.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 9e8bc802ac05..920fcc27c9a1 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -83,8 +83,7 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
 	if (!has_iova_flush_queue(iovad))
 		return;
 
-	if (timer_pending(&iovad->fq_timer))
-		del_timer(&iovad->fq_timer);
+	del_timer_sync(&iovad->fq_timer);
 
 	fq_destroy_all_entries(iovad);
 
-- 
2.28.0.dirty
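As background for the fix above, here is a minimal, self-contained
sketch of the race pattern being closed. The fq_ctx structure and the
*_sketch helpers are hypothetical stand-ins rather than the actual
iova.c code; only from_timer(), del_timer() and del_timer_sync() are
the real kernel timer API.

/*
 * Hypothetical sketch of the race pattern; not the actual iova.c code.
 */
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/timer.h>

struct fq_ctx {
	struct timer_list fq_timer;
	void *fq_entries;		/* what the timeout handler walks */
};

static void fq_timeout_sketch(struct timer_list *t)
{
	struct fq_ctx *ctx = from_timer(ctx, t, fq_timer);

	/* The handler may still be here, on another CPU, at teardown. */
	memset(ctx->fq_entries, 0, 16);	/* stand-in for the real flush work */
}

static void fq_teardown_sketch(struct fq_ctx *ctx)
{
	/*
	 * del_timer() only cancels a timer that has not fired yet and
	 * returns immediately, so a handler already running on another
	 * CPU would keep dereferencing ctx->fq_entries after the kfree()
	 * below. del_timer_sync() additionally waits for any running
	 * handler to finish before returning, closing the window.
	 */
	del_timer_sync(&ctx->fq_timer);

	kfree(ctx->fq_entries);
	kfree(ctx);
}

One caveat with this pattern: del_timer_sync() can deadlock if called
while holding a lock that the timer handler itself takes, since it
waits for the handler to finish. As the commit message notes,
free_iova_flush_queue() takes no locks, so that scenario does not
apply here.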