From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org, bskeggs@redhat.com, bhelgaas@google.com,
	chris@chris-wilson.co.uk, daniel@ffwll.ch, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, jxgao@google.com,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available
Date: Mon, 14 Jun 2021 08:25:30 +0200	[thread overview]
Message-ID: <20210614062530.GG28343@lst.de> (raw)
In-Reply-To: <20210611152659.2142983-8-tientzu@chromium.org>

On Fri, Jun 11, 2021 at 11:26:52PM +0800, Claire Chang wrote:
> Regardless of swiotlb setting, the restricted DMA pool is preferred if
> available.
> 
> The restricted DMA pools provide a basic level of protection against the
> DMA overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system
> needs to provide a way to lock down the memory access, e.g., MPU.
> 
> Note that is_dev_swiotlb_force() deliberately does not check whether
> swiotlb_force == SWIOTLB_FORCE; otherwise the memory allocation behavior
> with the default swiotlb would be changed by the following patch
> ("dma-direct: Allocate memory from restricted DMA pool if available").
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  include/linux/swiotlb.h | 10 +++++++++-
>  kernel/dma/direct.c     |  3 ++-
>  kernel/dma/direct.h     |  3 ++-
>  kernel/dma/swiotlb.c    |  1 +
>  4 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 06cf17a80f5c..8200c100fe10 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -85,6 +85,7 @@ extern enum swiotlb_force swiotlb_force;
>   *		unmap calls.
>   * @debugfs:	The dentry to debugfs.
>   * @late_alloc:	%true if allocated using the page allocator
> + * @force_swiotlb: %true if swiotlb is forced
>   */
>  struct io_tlb_mem {
>  	phys_addr_t start;
> @@ -95,6 +96,7 @@ struct io_tlb_mem {
>  	spinlock_t lock;
>  	struct dentry *debugfs;
>  	bool late_alloc;
> +	bool force_swiotlb;
>  	struct io_tlb_slot {
>  		phys_addr_t orig_addr;
>  		size_t alloc_size;
> @@ -115,6 +117,11 @@ static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
>  	dev->dma_io_tlb_mem = io_tlb_default_mem;
>  }
>  
> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +	return dev->dma_io_tlb_mem->force_swiotlb;
> +}
> +
>  void __init swiotlb_exit(void);
>  unsigned int swiotlb_max_segment(void);
>  size_t swiotlb_max_mapping_size(struct device *dev);
> @@ -126,8 +133,9 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	return false;
>  }
> -static inline void swiotlb_set_io_tlb_default_mem(struct device *dev)
> +static inline bool is_dev_swiotlb_force(struct device *dev)
>  {
> +	return false;
>  }
>  static inline void swiotlb_exit(void)
>  {
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 7a88c34d0867..078f7087e466 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
>  {
>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +	     is_dev_swiotlb_force(dev)))

I think we can remove the extra swiotlb_force check here if the
swiotlb_force setting is propagated into io_tlb_default_mem->force_swiotlb
when that structure is initialized; that also drops one global-variable
check from the fast path.
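A minimal sketch of what that propagation might look like, assuming an
init-time helper along these lines (the function name and the exact set
of fields touched are illustrative, not code from this series):

	/*
	 * Illustrative only: fold the global swiotlb_force override into
	 * the pool when the default io_tlb_mem is set up, so the fast
	 * path only ever consults dev->dma_io_tlb_mem->force_swiotlb.
	 */
	static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem,
					    phys_addr_t start,
					    unsigned long nslabs)
	{
		mem->start = start;
		mem->end = start + (nslabs << IO_TLB_SHIFT);
		mem->nslabs = nslabs;
		mem->force_swiotlb = (swiotlb_force == SWIOTLB_FORCE);
		spin_lock_init(&mem->lock);
	}

With that in place, is_dev_swiotlb_force() alone covers both the global
SWIOTLB_FORCE case and restricted pools.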

> -	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
> +	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
> +	    is_dev_swiotlb_force(dev))

Same here.
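To illustrate the effect (a sketch only; dma_direct_use_swiotlb() is a
hypothetical helper, not something this series adds), the mapping fast
path then reduces to a single device-local test:

	static inline bool dma_direct_use_swiotlb(struct device *dev,
						  phys_addr_t phys,
						  size_t size)
	{
		/* One per-device flag now covers SWIOTLB_FORCE and
		 * restricted pools alike.
		 */
		if (unlikely(is_dev_swiotlb_force(dev)))
			return true;
		/* Otherwise bounce only when the device cannot address
		 * the memory directly.
		 */
		return !dma_capable(dev, phys_to_dma(dev, phys), size, true);
	}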

  reply	other threads:[~2021-06-14  6:25 UTC|newest]

Thread overview: 144+ messages (duplicate entries from cross-posted list mirrors omitted)
2021-06-11 15:26 [PATCH v9 00/14] Restricted DMA Claire Chang
2021-06-11 15:26 ` [PATCH v9 01/14] swiotlb: Refactor swiotlb init functions Claire Chang
2021-06-14  6:16   ` Christoph Hellwig
2021-06-15  4:06     ` Claire Chang
2021-06-11 15:26 ` [PATCH v9 02/14] swiotlb: Refactor swiotlb_create_debugfs Claire Chang
2021-06-14  6:17   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 03/14] swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used Claire Chang
2021-06-11 15:33   ` Claire Chang
2021-06-14  6:20     ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 04/14] swiotlb: Add restricted DMA pool initialization Claire Chang
2021-06-14  6:21   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 05/14] swiotlb: Update is_swiotlb_buffer to add a struct device argument Claire Chang
2021-06-14  6:21   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 06/14] swiotlb: Update is_swiotlb_active " Claire Chang
2021-06-11 15:34   ` Claire Chang
2021-06-14  6:23   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 07/14] swiotlb: Bounce data from/to restricted DMA pool if available Claire Chang
2021-06-14  6:25   ` Christoph Hellwig [this message]
2021-06-11 15:26 ` [PATCH v9 08/14] swiotlb: Move alloc_size to find_slots Claire Chang
2021-06-14  6:25   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 09/14] swiotlb: Refactor swiotlb_tbl_unmap_single Claire Chang
2021-06-14  6:26   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 10/14] dma-direct: Add a new wrapper __dma_direct_free_pages() Claire Chang
2021-06-11 15:26 ` [PATCH v9 11/14] swiotlb: Add restricted DMA alloc/free support Claire Chang
2021-06-14  6:28   ` Christoph Hellwig
2021-06-11 15:26 ` [PATCH v9 12/14] dma-direct: Allocate memory from restricted DMA pool if available Claire Chang
2021-06-11 15:26 ` [PATCH v9 13/14] dt-bindings: of: Add restricted DMA pool Claire Chang
2021-06-11 15:26 ` [PATCH v9 14/14] of: Add plumbing for " Claire Chang
2021-06-11 17:37 ` [Intel-gfx] ✗ Fi.CI.BUILD: failure for Restricted DMA Patchwork
2021-06-15 13:30 ` [PATCH v9 00/14] " Claire Chang
