From: Robin Murphy <robin.murphy@arm.com>
To: Andi Kleen <ak@linux.intel.com>, mst@redhat.com
Cc: jasowang@redhat.com, virtualization@lists.linux-foundation.org,
	hch@lst.de, m.szyprowski@samsung.com,
	iommu@lists.linux-foundation.org, x86@kernel.org,
	sathyanarayanan.kuppuswamy@linux.intel.com, jpoimboe@redhat.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 6/8] dma: Add return value to dma_unmap_page
Date: Thu, 3 Jun 2021 10:08:29 +0100
Message-ID: <c3b15bc2-104b-dace-1f23-608197b18107@arm.com>
In-Reply-To: <20210603004133.4079390-7-ak@linux.intel.com>

Hi Andi,

On 2021-06-03 01:41, Andi Kleen wrote:
> In some situations when we know swiotlb is forced and we have
> to deal with untrusted hosts, it's useful to know if a mapping
> was in the swiotlb or not. This allows us to abort any IO
> operation that would access memory outside the swiotlb.
> 
> Otherwise it might be possible for a malicious host to inject
> any guest page in a read operation. While it couldn't directly
> access the results of the read() inside the guest, there
> might be scenarios where data is echoed back with a write(),
> and that would then leak guest memory.
> 
> Add a return value to dma_unmap_single/page. Most users
> of course will ignore it. The return value is set to EIO
> if we're in forced swiotlb mode and the buffer is not inside
> the swiotlb buffer. Otherwise it's always 0.

I have to say my first impression of this isn't too good :(

What it looks like to me is abusing SWIOTLB's internal housekeeping to 
keep track of virtio-specific state. The DMA API does not attempt to 
validate calls in general since in many cases the additional overhead 
would be prohibitive. It has always been callers' responsibility to keep 
track of what they mapped and make sure sync/unmap calls match, and 
there are many, many subtle and not-so-subtle ways for things to go 
wrong if they don't. If virtio is not doing a good enough job of that, 
what's the justification for making it the DMA API's problem?
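
(To illustrate the expected contract, a minimal hypothetical sketch - the
foo_* names are invented, not taken from any real driver:)

#include <linux/dma-mapping.h>

struct foo_buf {
	void *cpu;
	dma_addr_t dma;
	size_t len;
};

static int foo_map(struct device *dev, struct foo_buf *b)
{
	/* Record exactly what was mapped... */
	b->dma = dma_map_single(dev, b->cpu, b->len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, b->dma))
		return -ENOMEM;
	return 0;
}

static void foo_unmap(struct device *dev, struct foo_buf *b)
{
	/* ...and unmap with the same handle, size and direction. */
	dma_unmap_single(dev, b->dma, b->len, DMA_FROM_DEVICE);
}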

> A new callback is used to avoid changing all the IOMMU drivers.

Nit: presumably by "IOMMU drivers" you actually mean arch DMA API backends?

As an aside, we'll take a look at the rest of the series from the 
perspective of our prototyping for Arm's Confidential Compute 
Architecture, but I'm not sure we'll need it, since accesses beyond the 
bounds of the shared SWIOTLB buffer shouldn't be an issue for us. 
Furthermore, AFAICS it's still not going to help against exfiltrating 
guest memory by over-unmapping the original SWIOTLB slot *without* going 
past the end of the whole buffer, but I think Martin's patch *has* 
addressed that already.
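
(Concretely, the kind of mismatch I mean - the buffer and sizes here are
invented for illustration:)

	/* Map 512 bytes; swiotlb sets up a bounce slot for them. */
	dma_addr_t handle = dma_map_single(dev, buf, 512, DMA_BIDIRECTIONAL);

	/* ... device does its DMA ... */

	/*
	 * Unmap with an inflated size: the extra bytes can still lie
	 * entirely within the swiotlb buffer, so a check against the
	 * buffer's end never fires. Validating the size against what
	 * was recorded at map time is what actually catches this.
	 */
	dma_unmap_single(dev, handle, 4096, DMA_BIDIRECTIONAL);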

Robin.

> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>   drivers/iommu/dma-iommu.c   | 17 +++++++++++------
>   include/linux/dma-map-ops.h |  3 +++
>   include/linux/dma-mapping.h |  7 ++++---
>   kernel/dma/mapping.c        |  6 +++++-
>   4 files changed, 23 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 7ef13198721b..babe46f2ae3a 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -491,7 +491,8 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
>   	iommu_dma_free_iova(cookie, dma_addr, size, iotlb_gather.freelist);
>   }
>   
> -static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
> +static int __iommu_dma_unmap_swiotlb_check(struct device *dev,
> +		dma_addr_t dma_addr,
>   		size_t size, enum dma_data_direction dir,
>   		unsigned long attrs)
>   {
> @@ -500,12 +501,15 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
>   
>   	phys = iommu_iova_to_phys(domain, dma_addr);
>   	if (WARN_ON(!phys))
> -		return;
> +		return -EIO;
>   
>   	__iommu_dma_unmap(dev, dma_addr, size);
>   
>   	if (unlikely(is_swiotlb_buffer(phys, size)))
>   		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> +	else if (swiotlb_force == SWIOTLB_FORCE)
> +		return -EIO;
> +	return 0;
>   }
>   
>   static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
> @@ -856,12 +860,13 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>   	return dma_handle;
>   }
>   
> -static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
> +static int iommu_dma_unmap_page_check(struct device *dev, dma_addr_t dma_handle,
>   		size_t size, enum dma_data_direction dir, unsigned long attrs)
>   {
>   	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>   		iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
> -	__iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
> +	return __iommu_dma_unmap_swiotlb_check(dev, dma_handle, size, dir,
> +					       attrs);
>   }
>   
>   /*
> @@ -946,7 +951,7 @@ static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *s
>   	int i;
>   
>   	for_each_sg(sg, s, nents, i)
> -		__iommu_dma_unmap_swiotlb(dev, sg_dma_address(s),
> +		__iommu_dma_unmap_swiotlb_check(dev, sg_dma_address(s),
>   				sg_dma_len(s), dir, attrs);
>   }
>   
> @@ -1291,7 +1296,7 @@ static const struct dma_map_ops iommu_dma_ops = {
>   	.mmap			= iommu_dma_mmap,
>   	.get_sgtable		= iommu_dma_get_sgtable,
>   	.map_page		= iommu_dma_map_page,
> -	.unmap_page		= iommu_dma_unmap_page,
> +	.unmap_page_check	= iommu_dma_unmap_page_check,
>   	.map_sg			= iommu_dma_map_sg,
>   	.unmap_sg		= iommu_dma_unmap_sg,
>   	.sync_single_for_cpu	= iommu_dma_sync_single_for_cpu,
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index 0d53a96a3d64..0ed0190f7949 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -69,6 +69,9 @@ struct dma_map_ops {
>   	u64 (*get_required_mask)(struct device *dev);
>   	size_t (*max_mapping_size)(struct device *dev);
>   	unsigned long (*get_merge_boundary)(struct device *dev);
> +	int (*unmap_page_check)(struct device *dev, dma_addr_t dma_handle,
> +			size_t size, enum dma_data_direction dir,
> +			unsigned long attrs);
>   };
>   
>   #ifdef CONFIG_DMA_OPS
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 37fbd12bd4ab..25b8382f8601 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -103,7 +103,7 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
>   dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
>   		size_t offset, size_t size, enum dma_data_direction dir,
>   		unsigned long attrs);
> -void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
> +int dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>   		enum dma_data_direction dir, unsigned long attrs);
>   int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>   		enum dma_data_direction dir, unsigned long attrs);
> @@ -160,9 +160,10 @@ static inline dma_addr_t dma_map_page_attrs(struct device *dev,
>   {
>   	return DMA_MAPPING_ERROR;
>   }
> -static inline void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr,
> +static inline int dma_unmap_page_attrs(struct device *dev, dma_addr_t addr,
>   		size_t size, enum dma_data_direction dir, unsigned long attrs)
>   {
> +	return 0;
>   }
>   static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
>   		int nents, enum dma_data_direction dir, unsigned long attrs)
> @@ -323,7 +324,7 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
>   			size, dir, attrs);
>   }
>   
> -static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
> +static inline int dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
>   		size_t size, enum dma_data_direction dir, unsigned long attrs)
>   {
>   	return dma_unmap_page_attrs(dev, addr, size, dir, attrs);
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index 9bf02c8d7d1b..dc0ce649d1f9 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -162,18 +162,22 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
>   }
>   EXPORT_SYMBOL(dma_map_page_attrs);
>   
> -void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
> +int dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>   		enum dma_data_direction dir, unsigned long attrs)
>   {
>   	const struct dma_map_ops *ops = get_dma_ops(dev);
> +	int ret = 0;
>   
>   	BUG_ON(!valid_dma_direction(dir));
>   	if (dma_map_direct(dev, ops) ||
>   	    arch_dma_unmap_page_direct(dev, addr + size))
>   		dma_direct_unmap_page(dev, addr, size, dir, attrs);
> +	else if (ops->unmap_page_check)
> +		ret = ops->unmap_page_check(dev, addr, size, dir, attrs);
>   	else if (ops->unmap_page)
>   		ops->unmap_page(dev, addr, size, dir, attrs);
>   	debug_dma_unmap_page(dev, addr, size, dir);
> +	return ret;
>   }
>   EXPORT_SYMBOL(dma_unmap_page_attrs);
>   
> 
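
For what it's worth, a caller acting on the new return value would
presumably look something like this (hypothetical sketch, not code from
this series - bar_rx_complete and its naming are invented):

static int bar_rx_complete(struct device *dev, dma_addr_t addr, size_t len)
{
	int err;

	/*
	 * With this patch, a nonzero return means the mapping was not
	 * backed by the forced swiotlb buffer.
	 */
	err = dma_unmap_single_attrs(dev, addr, len, DMA_FROM_DEVICE, 0);
	if (err)
		return err;	/* abort rather than consume the data */

	/* Data was bounced through swiotlb; safe to process. */
	return 0;
}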
