* make dma_addressing_limited work for memory encryption setups

From: Christoph Hellwig
Date: 2019-11-27 14:40 UTC
To: Thomas Hellstrom
Cc: Christian König, Tom Lendacky, iommu, linux-mm, linux-kernel

Hi all,

this little series fixes dma_addressing_limited to return true for
systems that use bounce buffers due to memory encryption.
* [PATCH 1/2] dma-mapping: move dma_addressing_limited out of line

From: Christoph Hellwig
Date: 2019-11-27 14:40 UTC
To: Thomas Hellstrom
Cc: Christian König, Tom Lendacky, iommu, linux-mm, linux-kernel

This function isn't used in the fast path, and moving it out of line
will reduce include clutter with the next change.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/dma-mapping.h | 14 +-------------
 kernel/dma/mapping.c        | 15 +++++++++++++++
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index c4d8741264bd..94ef74ecd18a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -687,19 +687,7 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
 	return dma_set_mask_and_coherent(dev, mask);
 }
 
-/**
- * dma_addressing_limited - return if the device is addressing limited
- * @dev: device to check
- *
- * Return %true if the devices DMA mask is too small to address all memory in
- * the system, else %false. Lack of addressing bits is the prime reason for
- * bounce buffering, but might not be the only one.
- */
-static inline bool dma_addressing_limited(struct device *dev)
-{
-	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
-		dma_get_required_mask(dev);
-}
+bool dma_addressing_limited(struct device *dev);
 
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 12ff766ec1fa..1dbe6d725962 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -405,3 +405,18 @@ unsigned long dma_get_merge_boundary(struct device *dev)
 	return ops->get_merge_boundary(dev);
 }
 EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
+
+/**
+ * dma_addressing_limited - return if the device is addressing limited
+ * @dev: device to check
+ *
+ * Return %true if the devices DMA mask is too small to address all memory in
+ * the system, else %false. Lack of addressing bits is the prime reason for
+ * bounce buffering, but might not be the only one.
+ */
+bool dma_addressing_limited(struct device *dev)
+{
+	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
+		dma_get_required_mask(dev);
+}
+EXPORT_SYMBOL_GPL(dma_addressing_limited);
-- 
2.20.1
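The inequality being moved out of line can be illustrated with a small userspace model. Note that `min_not_zero()` below is a simplified stand-in for the kernel macro, and the mask values replace `dma_get_mask()`/`dma_get_required_mask()` for illustration only:

```c
#include <stdbool.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's min_not_zero(): the smaller of
 * two values, where a zero argument means "no limit set" and is
 * ignored. */
static uint64_t min_not_zero(uint64_t a, uint64_t b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}

/* Model of dma_addressing_limited(): the device is limited when the
 * tighter of its DMA mask and bus limit cannot cover the mask needed
 * to reach all installed memory. */
static bool addressing_limited(uint64_t dma_mask, uint64_t bus_dma_limit,
			       uint64_t required_mask)
{
	return min_not_zero(dma_mask, bus_dma_limit) < required_mask;
}
```

For example, a 32-bit-capable device on a machine whose memory needs 36 address bits is reported as limited, and a zero `bus_dma_limit` (the common case) never tightens the comparison.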
* Re: [PATCH 1/2] dma-mapping: move dma_addressing_limited out of line

From: Matthew Wilcox
Date: 2019-11-27 17:13 UTC
To: Christoph Hellwig
Cc: Thomas Hellstrom, Christian König, Tom Lendacky, iommu, linux-mm, linux-kernel

On Wed, Nov 27, 2019 at 03:40:05PM +0100, Christoph Hellwig wrote:
> +/**
> + * dma_addressing_limited - return if the device is addressing limited
> + * @dev: device to check
> + *
> + * Return %true if the devices DMA mask is too small to address all memory in

Could I trouble you to use a : after Return?  That turns it into its own
section rather than making it part of the generic description.
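For reference, a kernel-doc comment with the suggested `Return:` section would look like this (a sketch of the suggestion, not text from the thread):

```c
/**
 * dma_addressing_limited - return if the device is addressing limited
 * @dev: device to check
 *
 * Return: %true if the device's DMA mask is too small to address all
 * memory in the system, else %false. Lack of addressing bits is the
 * prime reason for bounce buffering, but might not be the only one.
 */
```

With the colon, kernel-doc renders the paragraph as a dedicated "Return" section instead of folding it into the general description.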
* [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: Christoph Hellwig
Date: 2019-11-27 14:40 UTC
To: Thomas Hellstrom
Cc: Christian König, Tom Lendacky, iommu, linux-mm, linux-kernel

Devices that are forced to DMA through unencrypted bounce buffers need
to be treated as if they are addressing limited.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 kernel/dma/mapping.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 1dbe6d725962..f6c35b53d996 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -416,6 +416,8 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
  */
 bool dma_addressing_limited(struct device *dev)
 {
+	if (force_dma_unencrypted(dev))
+		return true;
 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
 		dma_get_required_mask(dev);
 }
-- 
2.20.1
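The behavioral change can be modeled the same way. Here `force_dma_unencrypted()` is replaced by a plain boolean parameter, which is an assumption for illustration (in the kernel it consults the platform's memory-encryption state, e.g. under AMD SEV):

```c
#include <stdbool.h>
#include <stdint.h>

/* Same stand-in for the kernel's min_not_zero() as above. */
static uint64_t min_not_zero(uint64_t a, uint64_t b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}

/* Patched predicate: a device that must bounce through unencrypted
 * buffers is always reported as addressing limited, no matter how
 * wide its DMA mask is. */
static bool addressing_limited(bool forced_unencrypted, uint64_t dma_mask,
			       uint64_t bus_dma_limit, uint64_t required_mask)
{
	if (forced_unencrypted)
		return true;
	return min_not_zero(dma_mask, bus_dma_limit) < required_mask;
}
```

The point of the short-circuit is visible in the first case below: even a full 64-bit mask does not help once bounce buffering is mandatory.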
* Re: [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: Thomas Hellstrom
Date: 2019-11-27 18:22 UTC
To: hch
Cc: thomas.lendacky, linux-kernel, christian.koenig, linux-mm, iommu

Hi,

On Wed, 2019-11-27 at 15:40 +0100, Christoph Hellwig wrote:
> Devices that are forced to DMA through unencrypted bounce buffers
> need to be treated as if they are addressing limited.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  kernel/dma/mapping.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1dbe6d725962..f6c35b53d996 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -416,6 +416,8 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
>   */
>  bool dma_addressing_limited(struct device *dev)
>  {
> +	if (force_dma_unencrypted(dev))
> +		return true;
>  	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>  		dma_get_required_mask(dev);
>  }

Any chance to have the case

(swiotlb_force == SWIOTLB_FORCE)

also included? Otherwise for the series

Reviewed-by: Thomas Hellström <thellstrom@vmware.com>
* Re: [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: hch
Date: 2019-11-28 7:51 UTC
To: Thomas Hellstrom
Cc: hch, thomas.lendacky, linux-kernel, christian.koenig, linux-mm, iommu, Konrad Rzeszutek Wilk

On Wed, Nov 27, 2019 at 06:22:57PM +0000, Thomas Hellstrom wrote:
> > bool dma_addressing_limited(struct device *dev)
> > {
> > +	if (force_dma_unencrypted(dev))
> > +		return true;
> > 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
> > 		dma_get_required_mask(dev);
> > }
> 
> Any chance to have the case
> 
> (swiotlb_force == SWIOTLB_FORCE)
> 
> also included?

We have a hard time handling that in generic code.  Do we have any
good use case for SWIOTLB_FORCE now that we have force_dma_unencrypted?
I'd love to be able to get rid of it..
* Re: [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: Thomas Hellstrom
Date: 2019-11-28 8:02 UTC
To: hch
Cc: thomas.lendacky, linux-kernel, christian.koenig, linux-mm, iommu, Konrad Rzeszutek Wilk

On 11/28/19 8:51 AM, hch@lst.de wrote:
> On Wed, Nov 27, 2019 at 06:22:57PM +0000, Thomas Hellstrom wrote:
>>> bool dma_addressing_limited(struct device *dev)
>>> {
>>> +	if (force_dma_unencrypted(dev))
>>> +		return true;
>>> 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>>> 		dma_get_required_mask(dev);
>>> }
>> Any chance to have the case
>>
>> (swiotlb_force == SWIOTLB_FORCE)
>>
>> also included?
> We have a hard time handling that in generic code.  Do we have any
> good use case for SWIOTLB_FORCE now that we have force_dma_unencrypted?
> I'd love to be able to get rid of it..

IIRC the justification for it is debugging. Drivers that don't do
syncing correctly or have incorrect assumptions about the
initialization of DMA memory will not work properly when SWIOTLB is
forced. We recently found a vmw_pvscsi device flaw that way...

/Thomas
* Re: [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: hch
Date: 2019-11-28 15:36 UTC
To: Thomas Hellstrom
Cc: hch, thomas.lendacky, linux-kernel, christian.koenig, linux-mm, iommu, Konrad Rzeszutek Wilk

On Thu, Nov 28, 2019 at 08:02:16AM +0000, Thomas Hellstrom wrote:
> > We have a hard time handling that in generic code.  Do we have any
> > good use case for SWIOTLB_FORCE now that we have force_dma_unencrypted?
> > I'd love to be able to get rid of it..
> 
> IIRC the justification for it is debugging. Drivers that don't do
> syncing correctly or have incorrect assumptions of initialization of DMA
> memory will not work properly when SWIOTLB is forced. We recently found
> a vmw_pvscsi device flaw that way...

Ok.  I guess debugging is reasonable.  Although that means I need to
respin this quite a bit, as I now need a callout to dma_direct.  I'll
respin it in the next days.
* make dma_addressing_limited work for memory encryption setups v2

From: Christoph Hellwig
Date: 2019-12-04 13:03 UTC
To: Thomas Hellstrom
Cc: Christian König, Tom Lendacky, iommu, linux-mm, linux-kernel

Hi all,

this little series fixes dma_addressing_limited to return true for
systems that use bounce buffers due to memory encryption.

Changes since v1:
 - take SWIOTLB_FORCE into account
* [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: Christoph Hellwig
Date: 2019-12-04 13:03 UTC
To: Thomas Hellstrom
Cc: Christian König, Tom Lendacky, iommu, linux-mm, linux-kernel

Devices that are forced to DMA through swiotlb need to be treated as
if they are addressing limited.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/dma-direct.h | 1 +
 kernel/dma/direct.c        | 8 ++++++--
 kernel/dma/mapping.c       | 3 +++
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 24b8684aa21d..83aac21434c6 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -85,4 +85,5 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 int dma_direct_supported(struct device *dev, u64 mask);
+bool dma_direct_addressing_limited(struct device *dev);
 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 6af7ae83c4ad..450f3abe5cb5 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -497,11 +497,15 @@ int dma_direct_supported(struct device *dev, u64 mask)
 	return mask >= __phys_to_dma(dev, min_mask);
 }
 
+bool dma_direct_addressing_limited(struct device *dev)
+{
+	return force_dma_unencrypted(dev) || swiotlb_force == SWIOTLB_FORCE;
+}
+
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	if (is_swiotlb_active() && dma_addressing_limited(dev))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 1dbe6d725962..ebc60633d89a 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -416,6 +416,9 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
  */
 bool dma_addressing_limited(struct device *dev)
 {
+	if (dma_is_direct(get_dma_ops(dev)) &&
+	    dma_direct_addressing_limited(dev))
+		return true;
 	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
 		dma_get_required_mask(dev);
 }
-- 
2.20.1
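The structure of the v2 split can be sketched in userspace: the swiotlb/encryption knowledge lives in a dma-direct helper, and the generic predicate only consults it for devices that use direct mapping. In this model a NULL `dma_ops` pointer stands in for `dma_is_direct()`, and the two booleans stub `force_dma_unencrypted()` and the global `swiotlb_force == SWIOTLB_FORCE` check — all assumptions for illustration:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct dev_model {
	const void *dma_ops;	/* NULL means dma-direct, as in the kernel */
	bool force_unencrypted;	/* stub for force_dma_unencrypted(dev) */
	bool swiotlb_forced;	/* stub for swiotlb_force == SWIOTLB_FORCE
				 * (a global in the kernel, per-device here
				 * to keep the model self-contained) */
};

/* Model of dma_direct_addressing_limited(): bounce buffering is
 * mandatory either for memory encryption or because swiotlb=force
 * was given on the command line. */
static bool direct_addressing_limited(const struct dev_model *dev)
{
	return dev->force_unencrypted || dev->swiotlb_forced;
}

/* Model of the v2 dma_addressing_limited(): the dma-direct callout
 * only applies to devices without custom dma_ops; everyone else is
 * judged purely on the mask comparison. */
static bool addressing_limited(const struct dev_model *dev,
			       uint64_t dma_mask, uint64_t required_mask)
{
	if (dev->dma_ops == NULL && direct_addressing_limited(dev))
		return true;
	return dma_mask < required_mask;
}
```

Keeping the helper in dma-direct avoids leaking `swiotlb_force` into generic code, which is what made the v1 approach hard to do there.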
* Re: [PATCH 2/2] dma-mapping: force unencryped devices are always addressing limited

From: Thomas Hellstrom
Date: 2019-12-06 14:10 UTC
To: hch, christian.koenig
Cc: thomas.lendacky, linux-kernel, linux-mm, iommu

Hi, Christoph.

On Wed, 2019-12-04 at 14:03 +0100, Christoph Hellwig wrote:
> Devices that are forced to DMA through swiotlb need to be treated as
> if they are addressing limited.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/dma-direct.h | 1 +
>  kernel/dma/direct.c        | 8 ++++++--
>  kernel/dma/mapping.c       | 3 +++
>  3 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
> index 24b8684aa21d..83aac21434c6 100644
> --- a/include/linux/dma-direct.h
> +++ b/include/linux/dma-direct.h
> @@ -85,4 +85,5 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  int dma_direct_supported(struct device *dev, u64 mask);
> +bool dma_direct_addressing_limited(struct device *dev);
>  #endif /* _LINUX_DMA_DIRECT_H */
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 6af7ae83c4ad..450f3abe5cb5 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -497,11 +497,15 @@ int dma_direct_supported(struct device *dev, u64 mask)
>  	return mask >= __phys_to_dma(dev, min_mask);
>  }
>  
> +bool dma_direct_addressing_limited(struct device *dev)
> +{
> +	return force_dma_unencrypted(dev) || swiotlb_force == SWIOTLB_FORCE;
> +}
> +
>  size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
> -	if (is_swiotlb_active() &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	if (is_swiotlb_active() && dma_addressing_limited(dev))
>  		return swiotlb_max_mapping_size(dev);
>  	return SIZE_MAX;
>  }
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 1dbe6d725962..ebc60633d89a 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -416,6 +416,9 @@ EXPORT_SYMBOL_GPL(dma_get_merge_boundary);
>   */
>  bool dma_addressing_limited(struct device *dev)
>  {
> +	if (dma_is_direct(get_dma_ops(dev)) &&
> +	    dma_direct_addressing_limited(dev))
> +		return true;

This works fine for vmwgfx, for which the below expression is always 0.

But it looks like the only current user of dma_addressing_limited
outside of the dma code, radeon, actually wants only the below
expression, to force GFP_DMA32 page allocations when the devices have
limited dma address space. Perhaps Christian can elaborate on that.

So in the end it looks like we have two different use cases. One is to
force coherent memory (vmwgfx, possibly other graphics drivers) or
reduced queue depth (vmw_pvscsi) when we have bounce buffering. The
other is to force GFP_DMA32 page allocation when the device
dma-addressing is limited. Perhaps the latter mode can be replaced by
using dma_coherent memory and that functionality stripped from TTM?

>  	return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
>  		dma_get_required_mask(dev);
>  }

Thanks,
Thomas