* [PATCH, RFC] swiotlb-xen: fix dma to physical address translation for cache operations
@ 2023-06-07 13:25 Christoph Hellwig
  2023-07-08 23:59 ` Stefano Stabellini
  0 siblings, 1 reply; 2+ messages in thread
From: Christoph Hellwig @ 2023-06-07 13:25 UTC (permalink / raw)
  To: jgross, sstabellini, oleksandr_tyshchenko; +Cc: xen-devel, iommu

All other places in swiotlb-xen go from PFN to BFN and then call
phys_to_dma on the result, or vice versa, but the reverse mapping used
for cache maintenance skips the BFN to PFN translation.

[Note: only found by code inspection, please review very carefully!]

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/xen/swiotlb-xen.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 67aa74d201627d..e4620303138b4d 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -234,7 +234,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 
 done:
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))))
+		if (pfn_valid(PFN_DOWN(phys)))
 			arch_sync_dma_for_device(phys, size, dir);
 		else
 			xen_dma_sync_for_device(dev, dev_addr, size, dir);
@@ -258,7 +258,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr))))
+		if (pfn_valid(PFN_DOWN(paddr)))
 			arch_sync_dma_for_cpu(paddr, size, dir);
 		else
 			xen_dma_sync_for_cpu(hwdev, dev_addr, size, dir);
@@ -276,7 +276,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+		if (pfn_valid(PFN_DOWN(paddr)))
 			arch_sync_dma_for_cpu(paddr, size, dir);
 		else
 			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
@@ -296,7 +296,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+		if (pfn_valid(PFN_DOWN(paddr)))
 			arch_sync_dma_for_device(paddr, size, dir);
 		else
 			xen_dma_sync_for_device(dev, dma_addr, size, dir);
-- 
2.39.2



* Re: [PATCH, RFC] swiotlb-xen: fix dma to physical address translation for cache operations
  2023-06-07 13:25 [PATCH, RFC] swiotlb-xen: fix dma to physical address translation for cache operations Christoph Hellwig
@ 2023-07-08 23:59 ` Stefano Stabellini
  0 siblings, 0 replies; 2+ messages in thread
From: Stefano Stabellini @ 2023-07-08 23:59 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: jgross, sstabellini, oleksandr_tyshchenko, xen-devel, iommu

On Wed, 7 Jun 2023, Christoph Hellwig wrote:
> All other places in swiotlb-xen go from PFN to BFN and then call
> phys_to_dma on the result, or vice versa, but the reverse mapping used
> for cache maintenance skips the BFN to PFN translation.
> 
> [Note: only found by code inspection, please review very carefully!]
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Hi Christoph,

No, these checks are intentionally done differently. Let me explain.


xen_phys_to_dma does two translations:
1) guest physical to real physical (also called mfn in Xen)
2) real physical to dma

Translation 2) is not interesting in this discussion; 1) is.
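
For reference, the translation chain looks roughly like this (paraphrased
from memory, so names and exact signatures may differ slightly from the
real helpers in drivers/xen/swiotlb-xen.c; it relies on the usual Xen page
and DMA-direct headers):

/* 1) guest physical -> real physical: PFN -> BFN lookup */
static inline phys_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
{
	unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(paddr));
	phys_addr_t baddr = (phys_addr_t)bfn << XEN_PAGE_SHIFT;

	baddr |= paddr & ~XEN_PAGE_MASK;
	return baddr;
}

/* 2) real physical -> dma address */
static inline dma_addr_t xen_phys_to_dma(struct device *dev, phys_addr_t paddr)
{
	return phys_to_dma(dev, xen_phys_to_bus(dev, paddr));
}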

Dom0 is mapped 1:1 (guest physical == real physical) by default on ARM.

For any Dom0 memory page:
- guest physical == real physical

Dom0 sometimes maps memory of other domains. By definition, those
mappings are not 1:1. For those pages:
- guest physical != real physical

Linux normally only knows about guest physical addresses, not real
physical addresses.
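
As a concrete illustration (addresses made up purely for the example):

  local Dom0 page:  guest physical 0x48000000 -> real physical 0x48000000
  foreign page:     guest physical 0x50000000 -> real physical 0x812345000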


The checks below are meant to determine whether a given page is a
regular Dom0 page or a page of another domain. They rely on the fact
that for a local Dom0 page guest physical == real physical and
pfn_valid() on the real physical address returns true, while for a page
that is not a local Dom0 page guest physical != real physical and
pfn_valid() on the real physical address returns false.

Basically the check is doing:

  pfn_valid(real phys address)

In the local case:

  real phys address == guest phys address, so pfn_valid(real phys address) is true

In the foreign case:

  real phys address != guest phys address, and pfn_valid(real phys address) is false


Note that pfn_valid(guest phys address) would return true in both cases
because the foreign page is mapped over a valid Dom0 guest physical
address.


In short, it is a simple trick to detect whether the address corresponds
to a regular Dom0 memory page or to a foreign one.
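
To spell the same thing out next to the code from the patch (a sketch
with explanatory comments, not a quote from the driver):

/*
 * dma_to_phys(dev, dma_addr)     - real (machine) physical address,
 *                                  no BFN -> PFN translation applied
 * xen_dma_to_phys(dev, dma_addr) - guest physical address,
 *                                  BFN -> PFN translation applied
 *
 * The cache maintenance paths deliberately test the former:
 */
if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
	/* local Dom0 page: regular CPU cache maintenance works */
	arch_sync_dma_for_cpu(paddr, size, dir);
else
	/* foreign page: take the Xen-specific sync path */
	xen_dma_sync_for_cpu(dev, dma_addr, size, dir);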

Cheers,

Stefano
