From: David Stevens <stevensd@chromium.org>
To: Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>
Cc: Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
    Lu Baolu <baolu.lu@linux.intel.com>, Tom Murphy <murphyt7@tcd.ie>,
    Rajat Jain <rajatja@google.com>, iommu@lists.linux-foundation.org,
    linux-kernel@vger.kernel.org, David Stevens <stevensd@chromium.org>
Subject: [PATCH v8 0/7] Fixes for dma-iommu swiotlb bounce buffers
Date: Wed, 29 Sep 2021 11:32:53 +0900
Message-ID: <20210929023300.335969-1-stevensd@google.com>

From: David Stevens <stevensd@chromium.org>

This patch set includes various fixes for dma-iommu's swiotlb bounce
buffers for untrusted devices. The min_align_mask issue was found when
running fio on an untrusted nvme device with bs=512. The other issues
were found via code inspection, so I don't have any specific use cases
where things were not working, nor any concrete performance numbers.

There are two issues related to min_align_mask that this patch series
does not attempt to fix. First, it does not address the case where
min_align_mask is larger than the IOVA granule. Doing so requires
changes to IOVA allocation, and is not specific to the use of swiotlb
bounce buffers. This is not a problem in practice today, since the only
driver which uses min_align_mask is nvme, which sets it to 4096.

The second issue this series does not address is the fact that extra
swiotlb slots adjacent to a bounce buffer can be exposed to untrusted
devices whose drivers use min_align_mask. Fixing this requires being
able to allocate padding slots at the beginning of a swiotlb
allocation. This is a rather significant change that I am not
comfortable making. Without being able to handle this, there is also
little point to clearing the padding at the start of such a buffer,
since we can only clear based on (IO_TLB_SIZE - 1) instead of
iova_mask.
v7 -> v8:
 - Rebase on v5.15-rc3 and resolve conflicts with restricted dma

v6 -> v7:
 - Remove unsafe attempt to clear padding at start of swiotlb buffer
 - Rewrite commit message for min_align_mask commit to better explain
   the problem it's fixing
 - Rebase on iommu/core
 - Acknowledge unsolved issues in cover letter

v5 -> v6:
 - Remove unnecessary line break
 - Remove redundant config check

v4 -> v5:
 - Fix xen build error
 - Move _swiotlb refactor into its own patch

v3 -> v4:
 - Fold _swiotlb functions into _page functions
 - Add patch to align swiotlb buffer to iovad granule
 - Combine if checks in iommu_dma_sync_sg_* functions

v2 -> v3:
 - Add new patch to address min_align_mask bug
 - Set SKIP_CPU_SYNC flag after syncing in map/unmap
 - Properly call arch_sync_dma_for_cpu in iommu_dma_sync_sg_for_cpu

v1 -> v2:
 - Split fixes into dedicated patches
 - Less invasive changes to fix arch_sync when mapping
 - Leave dev_is_untrusted check for strict iommu

David Stevens (7):
  dma-iommu: fix sync_sg with swiotlb
  dma-iommu: fix arch_sync_dma for map
  dma-iommu: skip extra sync during unmap w/swiotlb
  dma-iommu: fold _swiotlb helpers into callers
  dma-iommu: Check CONFIG_SWIOTLB more broadly
  swiotlb: support aligned swiotlb buffers
  dma-iommu: account for min_align_mask w/swiotlb

 drivers/iommu/dma-iommu.c | 188 +++++++++++++++++---------------------
 drivers/xen/swiotlb-xen.c |   2 +-
 include/linux/swiotlb.h   |   3 +-
 kernel/dma/swiotlb.c      |  13 ++-
 4 files changed, 94 insertions(+), 112 deletions(-)

-- 
2.33.0.685.g46640cef36-goog