dri-devel.lists.freedesktop.org archive mirror
* Re: cleanup unused or almost unused IOMMU APIs and the FSL PAMU driver
       [not found] <20210301084257.945454-1-hch@lst.de>
@ 2021-03-04 11:10 ` Joerg Roedel
       [not found] ` <20210301084257.945454-15-hch@lst.de>
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Joerg Roedel @ 2021-03-04 11:10 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: freedreno, kvm, Will Deacon, Michael Ellerman, dri-devel,
	Li Yang, iommu, netdev, linux-arm-msm, virtualization,
	linuxppc-dev, David Woodhouse, linux-arm-kernel, Lu Baolu

On Mon, Mar 01, 2021 at 09:42:40AM +0100, Christoph Hellwig wrote:
> Diffstat:
>  arch/powerpc/include/asm/fsl_pamu_stash.h   |   12 
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c     |    2 
>  drivers/iommu/amd/iommu.c                   |   23 
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |   85 ---
>  drivers/iommu/arm/arm-smmu/arm-smmu.c       |  122 +---
>  drivers/iommu/dma-iommu.c                   |    8 
>  drivers/iommu/fsl_pamu.c                    |  264 ----------
>  drivers/iommu/fsl_pamu.h                    |   10 
>  drivers/iommu/fsl_pamu_domain.c             |  694 ++--------------------------
>  drivers/iommu/fsl_pamu_domain.h             |   46 -
>  drivers/iommu/intel/iommu.c                 |   55 --
>  drivers/iommu/iommu.c                       |   75 ---
>  drivers/soc/fsl/qbman/qman_portal.c         |   56 --
>  drivers/vfio/vfio_iommu_type1.c             |   31 -
>  drivers/vhost/vdpa.c                        |   10 
>  include/linux/iommu.h                       |   81 ---
>  16 files changed, 214 insertions(+), 1360 deletions(-)

Nice cleanup, thanks. The fsl_pamu driver and interface have always been
a bit of an alien compared to the other IOMMU drivers. I am inclined to
merge this after -rc3 is out, given some reviews. Can you also please
add changelogs to the last three patches?

Thanks,

	Joerg

* Re: [PATCH 14/17] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
       [not found] ` <20210301084257.945454-15-hch@lst.de>
@ 2021-03-04 15:25   ` Robin Murphy
       [not found]     ` <20210310091501.GC5928@lst.de>
  0 siblings, 1 reply; 13+ messages in thread
From: Robin Murphy @ 2021-03-04 15:25 UTC (permalink / raw)
  To: Christoph Hellwig, Joerg Roedel, Will Deacon, Li Yang
  Cc: kvm, Michael Ellerman, linux-arm-msm, linuxppc-dev, dri-devel,
	virtualization, iommu, netdev, freedreno, David Woodhouse,
	linux-arm-kernel

On 2021-03-01 08:42, Christoph Hellwig wrote:
> Use explicit methods for setting and querying the information instead.

Now that everyone's using iommu-dma, is there any point in bouncing this 
through the drivers at all? Seems like it would make more sense for the 
x86 drivers to reflect their private options back to iommu_dma_strict 
(and allow Intel's caching mode to override it as well), then have 
iommu_dma_init_domain just test !iommu_dma_strict && 
domain->ops->flush_iotlb_all.
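
Roughly, I'm picturing something like this in iommu_dma_init_domain()
(untested sketch; it assumes the iommu_dma_strict value, or a trivial
accessor for it, is visible to iommu-dma):

	/*
	 * Only bother with a flush queue if lazy invalidation is allowed
	 * globally and the driver can actually flush the whole IOTLB.
	 */
	if (!cookie->fq_domain && !iommu_dma_strict &&
	    domain->ops->flush_iotlb_all) {
		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
					  iommu_dma_entry_dtor))
			pr_warn("iova flush queue initialization failed\n");
	}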

Robin.

> Also remove the now unused iommu_domain_get_attr functionality.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/iommu/amd/iommu.c                   | 23 ++-------
>   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 47 ++++++-----------
>   drivers/iommu/arm/arm-smmu/arm-smmu.c       | 56 +++++----------------
>   drivers/iommu/dma-iommu.c                   |  8 ++-
>   drivers/iommu/intel/iommu.c                 | 27 ++--------
>   drivers/iommu/iommu.c                       | 19 +++----
>   include/linux/iommu.h                       | 17 ++-----
>   7 files changed, 51 insertions(+), 146 deletions(-)
> 
> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index a69a8b573e40d0..37a8e51db17656 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -1771,24 +1771,11 @@ static struct iommu_group *amd_iommu_device_group(struct device *dev)
>   	return acpihid_device_group(dev);
>   }
>   
> -static int amd_iommu_domain_get_attr(struct iommu_domain *domain,
> -		enum iommu_attr attr, void *data)
> +static bool amd_iommu_dma_use_flush_queue(struct iommu_domain *domain)
>   {
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		return -ENODEV;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = !amd_iommu_unmap_flush;
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> +	if (domain->type != IOMMU_DOMAIN_DMA)
> +		return false;
> +	return !amd_iommu_unmap_flush;
>   }
>   
>   /*****************************************************************************
> @@ -2257,7 +2244,7 @@ const struct iommu_ops amd_iommu_ops = {
>   	.release_device = amd_iommu_release_device,
>   	.probe_finalize = amd_iommu_probe_finalize,
>   	.device_group = amd_iommu_device_group,
> -	.domain_get_attr = amd_iommu_domain_get_attr,
> +	.dma_use_flush_queue = amd_iommu_dma_use_flush_queue,
>   	.get_resv_regions = amd_iommu_get_resv_regions,
>   	.put_resv_regions = generic_iommu_put_resv_regions,
>   	.is_attach_deferred = amd_iommu_is_attach_deferred,
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index 8594b4a8304375..bf96172e8c1f71 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -2449,33 +2449,21 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev)
>   	return group;
>   }
>   
> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> +static bool arm_smmu_dma_use_flush_queue(struct iommu_domain *domain)
>   {
>   	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>   
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = smmu_domain->non_strict;
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> +	if (domain->type != IOMMU_DOMAIN_DMA)
> +		return false;
> +	return smmu_domain->non_strict;
> +}
> +
> +
> +static void arm_smmu_dma_enable_flush_queue(struct iommu_domain *domain)
> +{
> +	if (domain->type != IOMMU_DOMAIN_DMA)
> +		return;
> +	to_smmu_domain(domain)->non_strict = true;
>   }
>   
>   static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> @@ -2505,13 +2493,7 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
>   		}
>   		break;
>   	case IOMMU_DOMAIN_DMA:
> -		switch(attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			smmu_domain->non_strict = *(int *)data;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> +		ret = -ENODEV;
>   		break;
>   	default:
>   		ret = -EINVAL;
> @@ -2619,7 +2601,8 @@ static struct iommu_ops arm_smmu_ops = {
>   	.probe_device		= arm_smmu_probe_device,
>   	.release_device		= arm_smmu_release_device,
>   	.device_group		= arm_smmu_device_group,
> -	.domain_get_attr	= arm_smmu_domain_get_attr,
> +	.dma_use_flush_queue	= arm_smmu_dma_use_flush_queue,
> +	.dma_enable_flush_queue	= arm_smmu_dma_enable_flush_queue,
>   	.domain_set_attr	= arm_smmu_domain_set_attr,
>   	.of_xlate		= arm_smmu_of_xlate,
>   	.get_resv_regions	= arm_smmu_get_resv_regions,
> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> index d8c6bfde6a6158..e7893e96f5177a 100644
> --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> @@ -1481,42 +1481,20 @@ static struct iommu_group *arm_smmu_device_group(struct device *dev)
>   	return group;
>   }
>   
> -static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> +static bool arm_smmu_dma_use_flush_queue(struct iommu_domain *domain)
>   {
>   	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>   
> -	switch(domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			*(int *)data = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED);
> -			return 0;
> -		case DOMAIN_ATTR_IO_PGTABLE_CFG: {
> -			struct io_pgtable_domain_attr *pgtbl_cfg = data;
> -			*pgtbl_cfg = smmu_domain->pgtbl_cfg;
> +	if (domain->type != IOMMU_DOMAIN_DMA)
> +		return false;
> +	return smmu_domain->pgtbl_cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT;
> +}
>   
> -			return 0;
> -		}
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE: {
> -			bool non_strict = smmu_domain->pgtbl_cfg.quirks &
> -					  IO_PGTABLE_QUIRK_NON_STRICT;
> -			*(int *)data = non_strict;
> -			return 0;
> -		}
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> +static void arm_smmu_dma_enable_flush_queue(struct iommu_domain *domain)
> +{
> +	if (domain->type != IOMMU_DOMAIN_DMA)
> +		return;
> +	to_smmu_domain(domain)->pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
>   }
>   
>   static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> @@ -1557,16 +1535,7 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
>   		}
>   		break;
>   	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			if (*(int *)data)
> -				smmu_domain->pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> -			else
> -				smmu_domain->pgtbl_cfg.quirks &= ~IO_PGTABLE_QUIRK_NON_STRICT;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> +		ret = -ENODEV;
>   		break;
>   	default:
>   		ret = -EINVAL;
> @@ -1631,7 +1600,8 @@ static struct iommu_ops arm_smmu_ops = {
>   	.probe_device		= arm_smmu_probe_device,
>   	.release_device		= arm_smmu_release_device,
>   	.device_group		= arm_smmu_device_group,
> -	.domain_get_attr	= arm_smmu_domain_get_attr,
> +	.dma_use_flush_queue	= arm_smmu_dma_use_flush_queue,
> +	.dma_enable_flush_queue	= arm_smmu_dma_enable_flush_queue,
>   	.domain_set_attr	= arm_smmu_domain_set_attr,
>   	.of_xlate		= arm_smmu_of_xlate,
>   	.get_resv_regions	= arm_smmu_get_resv_regions,
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 9ab6ee22c11088..d3fe5aad9d6ecf 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -305,8 +305,8 @@ static void iommu_dma_flush_iotlb_all(struct iova_domain *iovad)
>   	cookie = container_of(iovad, struct iommu_dma_cookie, iovad);
>   	domain = cookie->fq_domain;
>   	/*
> -	 * The IOMMU driver supporting DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
> -	 * implies that ops->flush_iotlb_all must be non-NULL.
> +	 * The IOMMU driver supporting a DMA flush queue implies that
> +	 * ops->flush_iotlb_all must be non-NULL.
>   	 */
>   	domain->ops->flush_iotlb_all(domain);
>   }
> @@ -329,7 +329,6 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>   	struct iommu_dma_cookie *cookie = domain->iova_cookie;
>   	unsigned long order, base_pfn;
>   	struct iova_domain *iovad;
> -	int attr;
>   
>   	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
>   		return -EINVAL;
> @@ -365,8 +364,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>   
>   	init_iova_domain(iovad, 1UL << order, base_pfn);
>   
> -	if (!cookie->fq_domain && !iommu_domain_get_attr(domain,
> -			DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) && attr) {
> +	if (!cookie->fq_domain && iommu_dma_use_flush_queue(domain)) {
>   		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
>   					  iommu_dma_entry_dtor))
>   			pr_warn("iova flush queue initialization failed\n");
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index ee0932307d646b..eaa80c33f4bc91 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5453,13 +5453,13 @@ intel_iommu_domain_set_attr(struct iommu_domain *domain,
>   	return ret;
>   }
>   
> -static bool domain_use_flush_queue(void)
> +static bool intel_iommu_dma_use_flush_queue(struct iommu_domain *domain)
>   {
>   	struct dmar_drhd_unit *drhd;
>   	struct intel_iommu *iommu;
>   	bool r = true;
>   
> -	if (intel_iommu_strict)
> +	if (domain->type != IOMMU_DOMAIN_DMA || intel_iommu_strict)
>   		return false;
>   
>   	/*
> @@ -5483,27 +5483,6 @@ static bool domain_use_flush_queue(void)
>   	return r;
>   }
>   
> -static int
> -intel_iommu_domain_get_attr(struct iommu_domain *domain,
> -			    enum iommu_attr attr, void *data)
> -{
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		return -ENODEV;
> -	case IOMMU_DOMAIN_DMA:
> -		switch (attr) {
> -		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> -			*(int *)data = domain_use_flush_queue();
> -			return 0;
> -		default:
> -			return -ENODEV;
> -		}
> -		break;
> -	default:
> -		return -EINVAL;
> -	}
> -}
> -
>   /*
>    * Check that the device does not live on an external facing PCI port that is
>    * marked as untrusted. Such devices should not be able to apply quirks and
> @@ -5576,7 +5555,7 @@ const struct iommu_ops intel_iommu_ops = {
>   	.capable		= intel_iommu_capable,
>   	.domain_alloc		= intel_iommu_domain_alloc,
>   	.domain_free		= intel_iommu_domain_free,
> -	.domain_get_attr        = intel_iommu_domain_get_attr,
> +	.dma_use_flush_queue	= intel_iommu_dma_use_flush_queue,
>   	.domain_set_attr	= intel_iommu_domain_set_attr,
>   	.attach_dev		= intel_iommu_attach_device,
>   	.detach_dev		= intel_iommu_detach_device,
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 23daaea7883b75..0f12c4d58cdc42 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1512,12 +1512,8 @@ static int iommu_group_alloc_default_domain(struct bus_type *bus,
>   	if (!group->domain)
>   		group->domain = dom;
>   
> -	if (!iommu_dma_strict) {
> -		int attr = 1;
> -		iommu_domain_set_attr(dom,
> -				      DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE,
> -				      &attr);
> -	}
> +	if (!iommu_dma_strict && dom->ops->dma_enable_flush_queue)
> +		dom->ops->dma_enable_flush_queue(dom);
>   
>   	return 0;
>   }
> @@ -2664,14 +2660,13 @@ static int __init iommu_init(void)
>   }
>   core_initcall(iommu_init);
>   
> -int iommu_domain_get_attr(struct iommu_domain *domain,
> -			  enum iommu_attr attr, void *data)
> +bool iommu_dma_use_flush_queue(struct iommu_domain *domain)
>   {
> -	if (!domain->ops->domain_get_attr)
> -		return -EINVAL;
> -	return domain->ops->domain_get_attr(domain, attr, data);
> +	if (!domain->ops->dma_use_flush_queue)
> +		return false;
> +	return domain->ops->dma_use_flush_queue(domain);
>   }
> -EXPORT_SYMBOL_GPL(iommu_domain_get_attr);
> +EXPORT_SYMBOL_GPL(iommu_dma_use_flush_queue);
>   
>   int iommu_domain_set_attr(struct iommu_domain *domain,
>   			  enum iommu_attr attr, void *data)
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index c15a8658daad64..f30de33c6ff56e 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -108,7 +108,6 @@ enum iommu_cap {
>   
>   enum iommu_attr {
>   	DOMAIN_ATTR_NESTING,	/* two stages of translation */
> -	DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE,
>   	DOMAIN_ATTR_IO_PGTABLE_CFG,
>   	DOMAIN_ATTR_MAX,
>   };
> @@ -194,7 +193,8 @@ struct iommu_iotlb_gather {
>    * @probe_finalize: Do final setup work after the device is added to an IOMMU
>    *                  group and attached to the groups domain
>    * @device_group: find iommu group for a particular device
> - * @domain_get_attr: Query domain attributes
> + * @dma_use_flush_queue: Returns %true if a DMA flush queue is used
> + * @dma_enable_flush_queue: Try to enable the DMA flush queue
>    * @domain_set_attr: Change domain attributes
>    * @get_resv_regions: Request list of reserved regions for a device
>    * @put_resv_regions: Free list of reserved regions for a device
> @@ -244,8 +244,8 @@ struct iommu_ops {
>   	void (*release_device)(struct device *dev);
>   	void (*probe_finalize)(struct device *dev);
>   	struct iommu_group *(*device_group)(struct device *dev);
> -	int (*domain_get_attr)(struct iommu_domain *domain,
> -			       enum iommu_attr attr, void *data);
> +	bool (*dma_use_flush_queue)(struct iommu_domain *domain);
> +	void (*dma_enable_flush_queue)(struct iommu_domain *domain);
>   	int (*domain_set_attr)(struct iommu_domain *domain,
>   			       enum iommu_attr attr, void *data);
>   
> @@ -491,8 +491,7 @@ extern int iommu_page_response(struct device *dev,
>   extern int iommu_group_id(struct iommu_group *group);
>   extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
>   
> -extern int iommu_domain_get_attr(struct iommu_domain *domain, enum iommu_attr,
> -				 void *data);
> +bool iommu_dma_use_flush_queue(struct iommu_domain *domain);
>   extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
>   				 void *data);
>   
> @@ -861,12 +860,6 @@ static inline int iommu_group_id(struct iommu_group *group)
>   	return -ENODEV;
>   }
>   
> -static inline int iommu_domain_get_attr(struct iommu_domain *domain,
> -					enum iommu_attr attr, void *data)
> -{
> -	return -EINVAL;
> -}
> -
>   static inline int iommu_domain_set_attr(struct iommu_domain *domain,
>   					enum iommu_attr attr, void *data)
>   {
> 

* Re: [PATCH 16/17] iommu: remove DOMAIN_ATTR_IO_PGTABLE_CFG
       [not found] ` <20210301084257.945454-17-hch@lst.de>
@ 2021-03-04 15:48   ` Robin Murphy
  2021-03-04 23:11     ` [Freedreno] " Rob Clark
  0 siblings, 1 reply; 13+ messages in thread
From: Robin Murphy @ 2021-03-04 15:48 UTC (permalink / raw)
  To: Christoph Hellwig, Joerg Roedel, Will Deacon, Li Yang
  Cc: kvm, Michael Ellerman, linux-arm-msm, linuxppc-dev, dri-devel,
	virtualization, iommu, netdev, freedreno, David Woodhouse,
	linux-arm-kernel

On 2021-03-01 08:42, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Moreso than the previous patch, where the feature is at least relatively 
generic (note that there's a bunch of in-flight development around 
DOMAIN_ATTR_NESTING), I'm really not convinced that it's beneficial to 
bloat the generic iommu_ops structure with private driver-specific 
interfaces. The attribute interface is a great compromise for these 
kinds of things, and you can easily add type-checked wrappers around it 
for external callers (maybe even make the actual attributes internal 
between the IOMMU core and drivers) if that's your concern.
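
For instance, a type-checked wrapper around the existing attribute could
be as small as this (sketch only; the name is made up for illustration):

	static inline int
	iommu_domain_set_pgtable_quirks(struct iommu_domain *domain,
					struct io_pgtable_domain_attr *pgtbl_cfg)
	{
		/* external callers get type checking; the attribute itself
		 * can stay private between the IOMMU core and drivers */
		return iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
					     pgtbl_cfg);
	}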

Robin.

> ---
>   drivers/gpu/drm/msm/adreno/adreno_gpu.c |  2 +-
>   drivers/iommu/arm/arm-smmu/arm-smmu.c   | 40 +++++++------------------
>   drivers/iommu/iommu.c                   |  9 ++++++
>   include/linux/iommu.h                   |  9 +++++-
>   4 files changed, 29 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 0f184c3dd9d9ec..78d98ab2ee3a68 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -191,7 +191,7 @@ void adreno_set_llc_attributes(struct iommu_domain *iommu)
>   	struct io_pgtable_domain_attr pgtbl_cfg;
>   
>   	pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
> -	iommu_domain_set_attr(iommu, DOMAIN_ATTR_IO_PGTABLE_CFG, &pgtbl_cfg);
> +	iommu_domain_set_pgtable_attr(iommu, &pgtbl_cfg);
>   }
>   
>   struct msm_gem_address_space *
> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> index 2e17d990d04481..2858999c86dfd1 100644
> --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> @@ -1515,40 +1515,22 @@ static int arm_smmu_domain_enable_nesting(struct iommu_domain *domain)
>   	return ret;
>   }
>   
> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> +static int arm_smmu_domain_set_pgtable_attr(struct iommu_domain *domain,
> +		struct io_pgtable_domain_attr *pgtbl_cfg)
>   {
> -	int ret = 0;
>   	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	int ret = -EPERM;
>   
> -	mutex_lock(&smmu_domain->init_mutex);
> -
> -	switch(domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_IO_PGTABLE_CFG: {
> -			struct io_pgtable_domain_attr *pgtbl_cfg = data;
> -
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> +	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
> +		return -EINVAL;
>   
> -			smmu_domain->pgtbl_cfg = *pgtbl_cfg;
> -			break;
> -		}
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		ret = -ENODEV;
> -		break;
> -	default:
> -		ret = -EINVAL;
> +	mutex_lock(&smmu_domain->init_mutex);
> +	if (!smmu_domain->smmu) {
> +		smmu_domain->pgtbl_cfg = *pgtbl_cfg;
> +		ret = 0;
>   	}
> -out_unlock:
>   	mutex_unlock(&smmu_domain->init_mutex);
> +
>   	return ret;
>   }
>   
> @@ -1609,7 +1591,7 @@ static struct iommu_ops arm_smmu_ops = {
>   	.device_group		= arm_smmu_device_group,
>   	.dma_use_flush_queue	= arm_smmu_dma_use_flush_queue,
>   	.dma_enable_flush_queue	= arm_smmu_dma_enable_flush_queue,
> -	.domain_set_attr	= arm_smmu_domain_set_attr,
> +	.domain_set_pgtable_attr = arm_smmu_domain_set_pgtable_attr,
>   	.domain_enable_nesting	= arm_smmu_domain_enable_nesting,
>   	.of_xlate		= arm_smmu_of_xlate,
>   	.get_resv_regions	= arm_smmu_get_resv_regions,
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 2e9e058501a953..8490aefd4b41f8 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2693,6 +2693,15 @@ int iommu_domain_enable_nesting(struct iommu_domain *domain)
>   }
>   EXPORT_SYMBOL_GPL(iommu_domain_enable_nesting);
>   
> +int iommu_domain_set_pgtable_attr(struct iommu_domain *domain,
> +		struct io_pgtable_domain_attr *pgtbl_cfg)
> +{
> +	if (!domain->ops->domain_set_pgtable_attr)
> +		return -EINVAL;
> +	return domain->ops->domain_set_pgtable_attr(domain, pgtbl_cfg);
> +}
> +EXPORT_SYMBOL_GPL(iommu_domain_set_pgtable_attr);
> +
>   void iommu_get_resv_regions(struct device *dev, struct list_head *list)
>   {
>   	const struct iommu_ops *ops = dev->bus->iommu_ops;
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index aed88aa3bd3edf..39d3ed4d2700ac 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -40,6 +40,7 @@ struct iommu_domain;
>   struct notifier_block;
>   struct iommu_sva;
>   struct iommu_fault_event;
> +struct io_pgtable_domain_attr;
>   
>   /* iommu fault flags */
>   #define IOMMU_FAULT_READ	0x0
> @@ -107,7 +108,6 @@ enum iommu_cap {
>    */
>   
>   enum iommu_attr {
> -	DOMAIN_ATTR_IO_PGTABLE_CFG,
>   	DOMAIN_ATTR_MAX,
>   };
>   
> @@ -196,6 +196,7 @@ struct iommu_iotlb_gather {
>    * @dma_enable_flush_queue: Try to enable the DMA flush queue
>    * @domain_set_attr: Change domain attributes
>    * @domain_enable_nesting: Enable nesting
> + * @domain_set_pgtable_attr: Set io page table attributes
>    * @get_resv_regions: Request list of reserved regions for a device
>    * @put_resv_regions: Free list of reserved regions for a device
>    * @apply_resv_region: Temporary helper call-back for iova reserved ranges
> @@ -249,6 +250,8 @@ struct iommu_ops {
>   	int (*domain_set_attr)(struct iommu_domain *domain,
>   			       enum iommu_attr attr, void *data);
>   	int (*domain_enable_nesting)(struct iommu_domain *domain);
> +	int (*domain_set_pgtable_attr)(struct iommu_domain *domain,
> +			struct io_pgtable_domain_attr *pgtbl_cfg);
>   
>   	/* Request/Free a list of reserved regions for a device */
>   	void (*get_resv_regions)(struct device *dev, struct list_head *list);
> @@ -493,9 +496,13 @@ extern int iommu_group_id(struct iommu_group *group);
>   extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
>   
>   bool iommu_dma_use_flush_queue(struct iommu_domain *domain);
> +int iommu_domain_set_pgtable_attr(struct iommu_domain *domain,
> +		struct io_pgtable_domain_attr *pgtbl_cfg);
>   extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
>   				 void *data);
>   int iommu_domain_enable_nesting(struct iommu_domain *domain);
> +int iommu_domain_set_pgtable_attr(struct iommu_domain *domain,
> +		struct io_pgtable_domain_attr *pgtbl_cfg);
>   
>   extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
>   			      unsigned long iova, int flags);
> 

* Re: [Freedreno] [PATCH 16/17] iommu: remove DOMAIN_ATTR_IO_PGTABLE_CFG
  2021-03-04 15:48   ` [PATCH 16/17] iommu: remove DOMAIN_ATTR_IO_PGTABLE_CFG Robin Murphy
@ 2021-03-04 23:11     ` Rob Clark
  2021-03-05 10:00       ` Will Deacon
  0 siblings, 1 reply; 13+ messages in thread
From: Rob Clark @ 2021-03-04 23:11 UTC (permalink / raw)
  To: Robin Murphy
  Cc: freedreno, Sai Prakash Ranjan, kvm, Michael Ellerman,
	Joerg Roedel, linuxppc-dev, dri-devel, Li Yang,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	netdev, David Woodhouse, linux-arm-msm, virtualization,
	Will Deacon, Christoph Hellwig,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE

On Thu, Mar 4, 2021 at 7:48 AM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-03-01 08:42, Christoph Hellwig wrote:
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
>
> Moreso than the previous patch, where the feature is at least relatively
> generic (note that there's a bunch of in-flight development around
> DOMAIN_ATTR_NESTING), I'm really not convinced that it's beneficial to
> bloat the generic iommu_ops structure with private driver-specific
> interfaces. The attribute interface is a great compromise for these
> kinds of things, and you can easily add type-checked wrappers around it
> for external callers (maybe even make the actual attributes internal
> between the IOMMU core and drivers) if that's your concern.

I suppose if this is *just* for the GPU we could move it into adreno_smmu_priv..

But one thing I'm not sure about is whether
IO_PGTABLE_QUIRK_ARM_OUTER_WBWA is something that other devices
*should* be using as well, but just haven't gotten around to yet.
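
Something like this, perhaps (completely hypothetical - the callback
name, signature and placement are made up just to illustrate the shape):

	struct adreno_smmu_priv {
		/* ...existing cookie and callbacks... */

		/*
		 * hypothetical: let the GPU driver request io-pgtable
		 * quirks such as IO_PGTABLE_QUIRK_ARM_OUTER_WBWA via the
		 * private interface rather than a generic iommu_ops hook
		 */
		void (*set_pgtable_quirks)(const void *cookie,
					   unsigned long quirks);
	};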

BR,
-R

> Robin.
>
> > ---
> >   drivers/gpu/drm/msm/adreno/adreno_gpu.c |  2 +-
> >   drivers/iommu/arm/arm-smmu/arm-smmu.c   | 40 +++++++------------------
> >   drivers/iommu/iommu.c                   |  9 ++++++
> >   include/linux/iommu.h                   |  9 +++++-
> >   4 files changed, 29 insertions(+), 31 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > index 0f184c3dd9d9ec..78d98ab2ee3a68 100644
> > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > @@ -191,7 +191,7 @@ void adreno_set_llc_attributes(struct iommu_domain *iommu)
> >       struct io_pgtable_domain_attr pgtbl_cfg;
> >
> >       pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
> > -     iommu_domain_set_attr(iommu, DOMAIN_ATTR_IO_PGTABLE_CFG, &pgtbl_cfg);
> > +     iommu_domain_set_pgtable_attr(iommu, &pgtbl_cfg);
> >   }
> >
> >   struct msm_gem_address_space *
> > diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> > index 2e17d990d04481..2858999c86dfd1 100644
> > --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> > +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> > @@ -1515,40 +1515,22 @@ static int arm_smmu_domain_enable_nesting(struct iommu_domain *domain)
> >       return ret;
> >   }
> >
> > -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> > -                                 enum iommu_attr attr, void *data)
> > +static int arm_smmu_domain_set_pgtable_attr(struct iommu_domain *domain,
> > +             struct io_pgtable_domain_attr *pgtbl_cfg)
> >   {
> > -     int ret = 0;
> >       struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> > +     int ret = -EPERM;
> >
> > -     mutex_lock(&smmu_domain->init_mutex);
> > -
> > -     switch(domain->type) {
> > -     case IOMMU_DOMAIN_UNMANAGED:
> > -             switch (attr) {
> > -             case DOMAIN_ATTR_IO_PGTABLE_CFG: {
> > -                     struct io_pgtable_domain_attr *pgtbl_cfg = data;
> > -
> > -                     if (smmu_domain->smmu) {
> > -                             ret = -EPERM;
> > -                             goto out_unlock;
> > -                     }
> > +     if (domain->type != IOMMU_DOMAIN_UNMANAGED)
> > +             return -EINVAL;
> >
> > -                     smmu_domain->pgtbl_cfg = *pgtbl_cfg;
> > -                     break;
> > -             }
> > -             default:
> > -                     ret = -ENODEV;
> > -             }
> > -             break;
> > -     case IOMMU_DOMAIN_DMA:
> > -             ret = -ENODEV;
> > -             break;
> > -     default:
> > -             ret = -EINVAL;
> > +     mutex_lock(&smmu_domain->init_mutex);
> > +     if (!smmu_domain->smmu) {
> > +             smmu_domain->pgtbl_cfg = *pgtbl_cfg;
> > +             ret = 0;
> >       }
> > -out_unlock:
> >       mutex_unlock(&smmu_domain->init_mutex);
> > +
> >       return ret;
> >   }
> >
> > @@ -1609,7 +1591,7 @@ static struct iommu_ops arm_smmu_ops = {
> >       .device_group           = arm_smmu_device_group,
> >       .dma_use_flush_queue    = arm_smmu_dma_use_flush_queue,
> >       .dma_enable_flush_queue = arm_smmu_dma_enable_flush_queue,
> > -     .domain_set_attr        = arm_smmu_domain_set_attr,
> > +     .domain_set_pgtable_attr = arm_smmu_domain_set_pgtable_attr,
> >       .domain_enable_nesting  = arm_smmu_domain_enable_nesting,
> >       .of_xlate               = arm_smmu_of_xlate,
> >       .get_resv_regions       = arm_smmu_get_resv_regions,
> > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > index 2e9e058501a953..8490aefd4b41f8 100644
> > --- a/drivers/iommu/iommu.c
> > +++ b/drivers/iommu/iommu.c
> > @@ -2693,6 +2693,15 @@ int iommu_domain_enable_nesting(struct iommu_domain *domain)
> >   }
> >   EXPORT_SYMBOL_GPL(iommu_domain_enable_nesting);
> >
> > +int iommu_domain_set_pgtable_attr(struct iommu_domain *domain,
> > +             struct io_pgtable_domain_attr *pgtbl_cfg)
> > +{
> > +     if (!domain->ops->domain_set_pgtable_attr)
> > +             return -EINVAL;
> > +     return domain->ops->domain_set_pgtable_attr(domain, pgtbl_cfg);
> > +}
> > +EXPORT_SYMBOL_GPL(iommu_domain_set_pgtable_attr);
> > +
> >   void iommu_get_resv_regions(struct device *dev, struct list_head *list)
> >   {
> >       const struct iommu_ops *ops = dev->bus->iommu_ops;
> > diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> > index aed88aa3bd3edf..39d3ed4d2700ac 100644
> > --- a/include/linux/iommu.h
> > +++ b/include/linux/iommu.h
> > @@ -40,6 +40,7 @@ struct iommu_domain;
> >   struct notifier_block;
> >   struct iommu_sva;
> >   struct iommu_fault_event;
> > +struct io_pgtable_domain_attr;
> >
> >   /* iommu fault flags */
> >   #define IOMMU_FAULT_READ    0x0
> > @@ -107,7 +108,6 @@ enum iommu_cap {
> >    */
> >
> >   enum iommu_attr {
> > -     DOMAIN_ATTR_IO_PGTABLE_CFG,
> >       DOMAIN_ATTR_MAX,
> >   };
> >
> > @@ -196,6 +196,7 @@ struct iommu_iotlb_gather {
> >    * @dma_enable_flush_queue: Try to enable the DMA flush queue
> >    * @domain_set_attr: Change domain attributes
> >    * @domain_enable_nesting: Enable nesting
> > + * @domain_set_pgtable_attr: Set io page table attributes
> >    * @get_resv_regions: Request list of reserved regions for a device
> >    * @put_resv_regions: Free list of reserved regions for a device
> >    * @apply_resv_region: Temporary helper call-back for iova reserved ranges
> > @@ -249,6 +250,8 @@ struct iommu_ops {
> >       int (*domain_set_attr)(struct iommu_domain *domain,
> >                              enum iommu_attr attr, void *data);
> >       int (*domain_enable_nesting)(struct iommu_domain *domain);
> > +     int (*domain_set_pgtable_attr)(struct iommu_domain *domain,
> > +                     struct io_pgtable_domain_attr *pgtbl_cfg);
> >
> >       /* Request/Free a list of reserved regions for a device */
> >       void (*get_resv_regions)(struct device *dev, struct list_head *list);
> > @@ -493,9 +496,13 @@ extern int iommu_group_id(struct iommu_group *group);
> >   extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
> >
> >   bool iommu_dma_use_flush_queue(struct iommu_domain *domain);
> > +int iommu_domain_set_pgtable_attr(struct iommu_domain *domain,
> > +             struct io_pgtable_domain_attr *pgtbl_cfg);
> >   extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
> >                                void *data);
> >   int iommu_domain_enable_nesting(struct iommu_domain *domain);
> > +int iommu_domain_set_pgtable_attr(struct iommu_domain *domain,
> > +             struct io_pgtable_domain_attr *pgtbl_cfg);
> >
> >   extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
> >                             unsigned long iova, int flags);
> >

* Re: [Freedreno] [PATCH 16/17] iommu: remove DOMAIN_ATTR_IO_PGTABLE_CFG
  2021-03-04 23:11     ` [Freedreno] " Rob Clark
@ 2021-03-05 10:00       ` Will Deacon
       [not found]         ` <20210310085806.GB5928@lst.de>
  0 siblings, 1 reply; 13+ messages in thread
From: Will Deacon @ 2021-03-05 10:00 UTC (permalink / raw)
  To: Rob Clark
  Cc: freedreno, Sai Prakash Ranjan, kvm, Michael Ellerman,
	Joerg Roedel, linuxppc-dev, dri-devel, Li Yang,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	netdev, David Woodhouse, linux-arm-msm, virtualization,
	Robin Murphy, Christoph Hellwig,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE

On Thu, Mar 04, 2021 at 03:11:08PM -0800, Rob Clark wrote:
> On Thu, Mar 4, 2021 at 7:48 AM Robin Murphy <robin.murphy@arm.com> wrote:
> >
> > On 2021-03-01 08:42, Christoph Hellwig wrote:
> > > Signed-off-by: Christoph Hellwig <hch@lst.de>
> >
> > Moreso than the previous patch, where the feature is at least relatively
> > generic (note that there's a bunch of in-flight development around
> > DOMAIN_ATTR_NESTING), I'm really not convinced that it's beneficial to
> > bloat the generic iommu_ops structure with private driver-specific
> > interfaces. The attribute interface is a great compromise for these
> > kinds of things, and you can easily add type-checked wrappers around it
> > for external callers (maybe even make the actual attributes internal
> > between the IOMMU core and drivers) if that's your concern.
> 
> I suppose if this is *just* for the GPU we could move it into adreno_smmu_priv..
> 
> But one thing I'm not sure about is whether
> IO_PGTABLE_QUIRK_ARM_OUTER_WBWA is something that other devices
> *should* be using as well, but just haven't gotten around to yet.

The intention is certainly that this would be a place to collate per-domain
pgtable quirks, so I'd prefer not to tie that to the GPU.

Will

* Re: cleanup unused or almost unused IOMMU APIs and the FSL PAMU driver
       [not found] <20210301084257.945454-1-hch@lst.de>
                   ` (2 preceding siblings ...)
       [not found] ` <20210301084257.945454-17-hch@lst.de>
@ 2021-03-06  0:03 ` Li Yang
       [not found] ` <20210301084257.945454-16-hch@lst.de>
  4 siblings, 0 replies; 13+ messages in thread
From: Li Yang @ 2021-03-06  0:03 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, Will Deacon, Joerg Roedel, linuxppc-dev, dri-devel,
	virtualization, Linux IOMMU,
	moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE, Netdev,
	freedreno, David Woodhouse, linux-arm-msm, Lu Baolu

On Mon, Mar 1, 2021 at 2:44 AM Christoph Hellwig <hch@lst.de> wrote:
>
> Hi all,
>
> there are a bunch of IOMMU APIs that are entirely unused, or only used as
> a private communication channel between the FSL PAMU driver and it's only
> consumer, the qbman portal driver.
>
> So this series drops a huge chunk of entirely unused FSL PAMU
> functionality, then drops all kinds of unused IOMMU APIs, and then
> replaces what is left of the iommu_attrs with properly typed, smaller
> and easier to use specific APIs.

It looks like the unused APIs were added for functionality that was
never completed later on.  So

Acked-by: Li Yang <leoyang.li@nxp.com>

>
> Diffstat:
>  arch/powerpc/include/asm/fsl_pamu_stash.h   |   12
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c     |    2
>  drivers/iommu/amd/iommu.c                   |   23
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |   85 ---
>  drivers/iommu/arm/arm-smmu/arm-smmu.c       |  122 +---
>  drivers/iommu/dma-iommu.c                   |    8
>  drivers/iommu/fsl_pamu.c                    |  264 ----------
>  drivers/iommu/fsl_pamu.h                    |   10
>  drivers/iommu/fsl_pamu_domain.c             |  694 ++--------------------------
>  drivers/iommu/fsl_pamu_domain.h             |   46 -
>  drivers/iommu/intel/iommu.c                 |   55 --
>  drivers/iommu/iommu.c                       |   75 ---
>  drivers/soc/fsl/qbman/qman_portal.c         |   56 --
>  drivers/vfio/vfio_iommu_type1.c             |   31 -
>  drivers/vhost/vdpa.c                        |   10
>  include/linux/iommu.h                       |   81 ---
>  16 files changed, 214 insertions(+), 1360 deletions(-)

* Re: [Freedreno] [PATCH 16/17] iommu: remove DOMAIN_ATTR_IO_PGTABLE_CFG
       [not found]         ` <20210310085806.GB5928@lst.de>
@ 2021-03-10 10:55           ` Will Deacon
  0 siblings, 0 replies; 13+ messages in thread
From: Will Deacon @ 2021-03-10 10:55 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: freedreno, Sai Prakash Ranjan, kvm, Michael Ellerman,
	Joerg Roedel, linuxppc-dev, dri-devel, Li Yang,
	list@263.net:IOMMU DRIVERS
	<iommu@lists.linux-foundation.org>,
	Joerg Roedel <joro@8bytes.org>, ,
	David Woodhouse, linux-arm-msm, virtualization, Robin Murphy,
	netdev, moderated list:ARM/FREESCALE IMX / MXC ARM ARCHITECTURE

On Wed, Mar 10, 2021 at 09:58:06AM +0100, Christoph Hellwig wrote:
> On Fri, Mar 05, 2021 at 10:00:12AM +0000, Will Deacon wrote:
> > > But one thing I'm not sure about is whether
> > > IO_PGTABLE_QUIRK_ARM_OUTER_WBWA is something that other devices
> > > *should* be using as well, but just haven't gotten around to yet.
> > 
> > The intention is certainly that this would be a place to collate per-domain
> > pgtable quirks, so I'd prefer not to tie that to the GPU.
> 
> So the overall consensus is to just keep this as-is for now?

Yes, please. If it doesn't see wider adoption then we can revisit it.

Will

* Re: [PATCH 14/17] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
       [not found]       ` <20210310092533.GA6819@lst.de>
@ 2021-03-10 18:39         ` Robin Murphy
       [not found]           ` <20210311082609.GA6990@lst.de>
  0 siblings, 1 reply; 13+ messages in thread
From: Robin Murphy @ 2021-03-10 18:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, Will Deacon, Joerg Roedel, linuxppc-dev, dri-devel, Li Yang,
	iommu, netdev, linux-arm-kernel, Michael Ellerman,
	virtualization, freedreno, David Woodhouse, linux-arm-msm

On 2021-03-10 09:25, Christoph Hellwig wrote:
> On Wed, Mar 10, 2021 at 10:15:01AM +0100, Christoph Hellwig wrote:
>> On Thu, Mar 04, 2021 at 03:25:27PM +0000, Robin Murphy wrote:
>>> On 2021-03-01 08:42, Christoph Hellwig wrote:
>>>> Use explicit methods for setting and querying the information instead.
>>>
>>> Now that everyone's using iommu-dma, is there any point in bouncing this
>>> through the drivers at all? Seems like it would make more sense for the x86
>>> drivers to reflect their private options back to iommu_dma_strict (and
>>> allow Intel's caching mode to override it as well), then have
>>> iommu_dma_init_domain just test !iommu_dma_strict &&
>>> domain->ops->flush_iotlb_all.
>>
>> Hmm.  I looked at this, and killing off ->dma_enable_flush_queue for
>> the ARM drivers and just looking at iommu_dma_strict seems like a
>> very clear win.
>>
>> OTOH x86 is a little more complicated.  AMD and Intel default to lazy
>> mode, so we'd have to change the global iommu_dma_strict if they are
>> initialized.  Also Intel has not only a "static" option to disable
>> lazy mode, but also a "dynamic" one where it iterates over the IOMMU
>> structures.  So I think on the get side we're stuck with the method,
>> but it still simplifies the whole thing.
> 
> Actually... Just mirroring the iommu_dma_strict value into
> struct iommu_domain should solve all of that with very little
> boilerplate code.

Yes, my initial thought was to directly replace the attribute with a
common flag at iommu_domain level, but since in all cases the behaviour
is effectively global rather than actually per-domain, it seemed
reasonable to take it a step further. This passes compile-testing for
arm64 and x86, what do you think?

Robin.

----->8-----
Subject: [PATCH] iommu: Consolidate strict invalidation handling

Now that everyone is using iommu-dma, the global invalidation policy
really doesn't need to be woven through several parts of the core API
and individual drivers, we can just look it up directly at the one point
that we now make the flush queue decision. If the x86 drivers reflect
their internal options and overrides back to iommu_dma_strict, that can
become the canonical source.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
  drivers/iommu/amd/iommu.c   |  2 ++
  drivers/iommu/dma-iommu.c   |  8 +-------
  drivers/iommu/intel/iommu.c | 12 ++++++++++++
  drivers/iommu/iommu.c       | 35 +++++++++++++++++++++++++++--------
  include/linux/iommu.h       |  2 ++
  5 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index a69a8b573e40..1db29e59d468 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1856,6 +1856,8 @@ int __init amd_iommu_init_dma_ops(void)
  	else
  		pr_info("Lazy IO/TLB flushing enabled\n");
  
+	iommu_set_dma_strict(amd_iommu_unmap_flush);
+
  	return 0;
  
  }
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..789a950cc125 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -304,10 +304,6 @@ static void iommu_dma_flush_iotlb_all(struct iova_domain *iovad)
  
  	cookie = container_of(iovad, struct iommu_dma_cookie, iovad);
  	domain = cookie->fq_domain;
-	/*
-	 * The IOMMU driver supporting DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
-	 * implies that ops->flush_iotlb_all must be non-NULL.
-	 */
  	domain->ops->flush_iotlb_all(domain);
  }
  
@@ -334,7 +330,6 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
  	unsigned long order, base_pfn;
  	struct iova_domain *iovad;
-	int attr;
  
  	if (!cookie || cookie->type != IOMMU_DMA_IOVA_COOKIE)
  		return -EINVAL;
@@ -371,8 +366,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
  	init_iova_domain(iovad, 1UL << order, base_pfn);
  
  	if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
-	    !iommu_domain_get_attr(domain, DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE, &attr) &&
-	    attr) {
+	    domain->ops->flush_iotlb_all && !iommu_get_dma_strict()) {
  		if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
  					  iommu_dma_entry_dtor))
  			pr_warn("iova flush queue initialization failed\n");
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index b5c746f0f63b..f5b452cd1266 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4377,6 +4377,17 @@ int __init intel_iommu_init(void)
  
  	down_read(&dmar_global_lock);
  	for_each_active_iommu(iommu, drhd) {
+		if (!intel_iommu_strict && cap_caching_mode(iommu->cap)) {
+			/*
+			 * The flush queue implementation does not perform page-selective
+			 * invalidations that are required for efficient TLB flushes in
+			 * virtual environments. The benefit of batching is likely to be
+			 * much lower than the overhead of synchronizing the virtual and
+			 * physical IOMMU page-tables.
+			 */
+			pr_warn("IOMMU batching is disabled due to virtualization");
+			intel_iommu_strict = 1;
+		}
  		iommu_device_sysfs_add(&iommu->iommu, NULL,
  				       intel_iommu_groups,
  				       "%s", iommu->name);
@@ -4384,6 +4395,7 @@ int __init intel_iommu_init(void)
  	}
  	up_read(&dmar_global_lock);
  
+	iommu_set_dma_strict(intel_iommu_strict);
  	bus_set_iommu(&pci_bus_type, &intel_iommu_ops);
  	if (si_domain && !hw_pass_through)
  		register_memory_notifier(&intel_iommu_memory_nb);
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 2f3399203813..80afcf50cd3c 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -69,6 +69,7 @@ static const char * const iommu_group_resv_type_string[] = {
  };
  
  #define IOMMU_CMD_LINE_DMA_API		BIT(0)
+#define IOMMU_CMD_LINE_STRICT		BIT(1)
  
  static void iommu_set_cmd_line_dma_api(void)
  {
@@ -80,6 +81,16 @@ static bool iommu_cmd_line_dma_api(void)
  	return !!(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API);
  }
  
+static void iommu_set_cmd_line_strict(void)
+{
+	iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
+}
+
+static bool iommu_cmd_line_strict(void)
+{
+	return !!(iommu_cmd_line & IOMMU_CMD_LINE_STRICT);
+}
+
  static int iommu_alloc_default_domain(struct iommu_group *group,
  				      struct device *dev);
  static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
@@ -337,10 +348,25 @@ early_param("iommu.passthrough", iommu_set_def_domain_type);
  
  static int __init iommu_dma_setup(char *str)
  {
-	return kstrtobool(str, &iommu_dma_strict);
+	int ret = kstrtobool(str, &iommu_dma_strict);
+
+	if (!ret)
+		iommu_set_cmd_line_strict();
+	return ret;
  }
  early_param("iommu.strict", iommu_dma_setup);
  
+void iommu_set_dma_strict(bool val)
+{
+	if (val || !iommu_cmd_line_strict())
+		iommu_dma_strict = val;
+}
+
+bool iommu_get_dma_strict(void)
+{
+	return iommu_dma_strict;
+}
+
  static ssize_t iommu_group_attr_show(struct kobject *kobj,
  				     struct attribute *__attr, char *buf)
  {
@@ -1520,13 +1546,6 @@ static int iommu_group_alloc_default_domain(struct bus_type *bus,
  	if (!group->domain)
  		group->domain = dom;
  
-	if (!iommu_dma_strict) {
-		int attr = 1;
-		iommu_domain_set_attr(dom,
-				      DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE,
-				      &attr);
-	}
-
  	return 0;
  }
  
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index eb5e3f14c5ad..11bbfa273d98 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -495,6 +495,8 @@ extern int iommu_domain_get_attr(struct iommu_domain *domain, enum iommu_attr,
  				 void *data);
  extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
  				 void *data);
+extern void iommu_set_dma_strict(bool val);
+extern bool iommu_get_dma_strict(void);
  
  /* Window handling function prototypes */
  extern int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
-- 
2.21.0.dirty


* Re: [PATCH 14/17] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
       [not found]           ` <20210311082609.GA6990@lst.de>
@ 2021-03-12 16:18             ` Robin Murphy
       [not found]               ` <20210315083347.GA28445@lst.de>
  0 siblings, 1 reply; 13+ messages in thread
From: Robin Murphy @ 2021-03-12 16:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, Will Deacon, Joerg Roedel, linuxppc-dev, dri-devel, Li Yang,
	iommu, netdev, linux-arm-kernel, Michael Ellerman,
	virtualization, freedreno, David Woodhouse, linux-arm-msm

On 2021-03-11 08:26, Christoph Hellwig wrote:
> On Wed, Mar 10, 2021 at 06:39:57PM +0000, Robin Murphy wrote:
>>> Actually... Just mirroring the iommu_dma_strict value into
>>> struct iommu_domain should solve all of that with very little
>>> boilerplate code.
>>
>> Yes, my initial thought was to directly replace the attribute with a
>> common flag at iommu_domain level, but since in all cases the behaviour
>> is effectively global rather than actually per-domain, it seemed
>> reasonable to take it a step further. This passes compile-testing for
>> arm64 and x86, what do you think?
> 
> It seems to miss a few bits, and also generally doesn't seem to
> actually apply to recent mainline (or something like it) due to
> different empty lines in a few places.

Yeah, that was sketched out on top of some other development patches, 
and in being so focused on not breaking any of the x86 behaviours I did 
indeed overlook fully converting the SMMU drivers... oops!

(my thought was to do the conversion for its own sake, then clean up the 
redundant attribute separately, but I guess it's fine either way)

> Let me know what you think of the version here:
> 
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/iommu-cleanup
> 
> I'll happily switch the patch to you as the author if you're fine with
> that as well.

I still have reservations about removing the attribute API entirely and 
pretending that io_pgtable_cfg is anything other than a SoC-specific 
private interface, but the reworked patch on its own looks reasonable to 
me, thanks! (I wasn't too convinced about the iommu_cmd_line wrappers 
either...) Just iommu_get_dma_strict() needs an export since the SMMU 
drivers can be modular - I consciously didn't add that myself since I 
was mistakenly thinking only iommu-dma would call it.
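
i.e. just tagging the accessor from the diff above for export, along
these lines (sketch):

	bool iommu_get_dma_strict(void)
	{
		return iommu_dma_strict;
	}
	EXPORT_SYMBOL_GPL(iommu_get_dma_strict);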

Robin.

* Re: [PATCH 15/17] iommu: remove DOMAIN_ATTR_NESTING
       [not found] ` <20210301084257.945454-16-hch@lst.de>
@ 2021-03-14 10:44   ` Auger Eric
       [not found]     ` <20210314155813.GA788@lst.de>
  0 siblings, 1 reply; 13+ messages in thread
From: Auger Eric @ 2021-03-14 10:44 UTC (permalink / raw)
  To: Christoph Hellwig, Joerg Roedel, Will Deacon, Li Yang
  Cc: kvm, Michael Ellerman, linux-arm-msm, linuxppc-dev, dri-devel,
	virtualization, iommu, netdev, freedreno, David Woodhouse,
	linux-arm-kernel

Hi Christoph,

On 3/1/21 9:42 AM, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 40 ++++++---------------
>  drivers/iommu/arm/arm-smmu/arm-smmu.c       | 30 ++++++++++------
>  drivers/iommu/intel/iommu.c                 | 28 +++++----------
>  drivers/iommu/iommu.c                       |  8 +++++
>  drivers/vfio/vfio_iommu_type1.c             |  5 +--
>  include/linux/iommu.h                       |  4 ++-
>  6 files changed, 50 insertions(+), 65 deletions(-)

As mentioned by Robin, there are series planning to use
DOMAIN_ATTR_NESTING to get info about the nesting capabilities of the
IOMMU (ARM and Intel):

[Patch v8 00/10] vfio: expose virtual Shared Virtual Addressing to VMs
patches 1, 2, 3

Is the plan to introduce a new domain_get_nesting_info op then?
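
(i.e. would it end up as a per-driver callback next to
domain_enable_nesting, something like the sketch below?  The info
structure here is only a placeholder for whatever that series defines.)

	int (*domain_get_nesting_info)(struct iommu_domain *domain,
				       struct iommu_nesting_info *info);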

Thanks

Eric	


> 
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index bf96172e8c1f71..8e6fee3ea454d3 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -2466,41 +2466,21 @@ static void arm_smmu_dma_enable_flush_queue(struct iommu_domain *domain)
>  	to_smmu_domain(domain)->non_strict = true;
>  }
>  
> -static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> -				    enum iommu_attr attr, void *data)
> +static int arm_smmu_domain_enable_nesting(struct iommu_domain *domain)
>  {
> -	int ret = 0;
>  	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	int ret = -EPERM;
>  
> -	mutex_lock(&smmu_domain->init_mutex);
> +	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
> +		return -EINVAL;
>  
> -	switch (domain->type) {
> -	case IOMMU_DOMAIN_UNMANAGED:
> -		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> -
> -			if (*(int *)data)
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> -			else
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -			break;
> -		default:
> -			ret = -ENODEV;
> -		}
> -		break;
> -	case IOMMU_DOMAIN_DMA:
> -		ret = -ENODEV;
> -		break;
> -	default:
> -		ret = -EINVAL;
> +	mutex_lock(&smmu_domain->init_mutex);
> +	if (!smmu_domain->smmu) {
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> +		ret = 0;
>  	}
> -
> -out_unlock:
>  	mutex_unlock(&smmu_domain->init_mutex);
> +
>  	return ret;
>  }
>  
> @@ -2603,7 +2583,7 @@ static struct iommu_ops arm_smmu_ops = {
>  	.device_group		= arm_smmu_device_group,
>  	.dma_use_flush_queue	= arm_smmu_dma_use_flush_queue,
>  	.dma_enable_flush_queue	= arm_smmu_dma_enable_flush_queue,
> -	.domain_set_attr	= arm_smmu_domain_set_attr,
> +	.domain_enable_nesting	= arm_smmu_domain_enable_nesting,
>  	.of_xlate		= arm_smmu_of_xlate,
>  	.get_resv_regions	= arm_smmu_get_resv_regions,
>  	.put_resv_regions	= generic_iommu_put_resv_regions,
> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> index e7893e96f5177a..2e17d990d04481 100644
> --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
> @@ -1497,6 +1497,24 @@ static void arm_smmu_dma_enable_flush_queue(struct iommu_domain *domain)
>  	to_smmu_domain(domain)->pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
>  }
>  
> +static int arm_smmu_domain_enable_nesting(struct iommu_domain *domain)
> +{
> +	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
> +	int ret = -EPERM;
> +	
> +	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
> +		return -EINVAL;
> +
> +	mutex_lock(&smmu_domain->init_mutex);
> +	if (!smmu_domain->smmu) {
> +		smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> +		ret = 0;
> +	}
> +	mutex_unlock(&smmu_domain->init_mutex);
> +
> +	return ret;
> +}
> +
>  static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
>  				    enum iommu_attr attr, void *data)
>  {
> @@ -1508,17 +1526,6 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
>  	switch(domain->type) {
>  	case IOMMU_DOMAIN_UNMANAGED:
>  		switch (attr) {
> -		case DOMAIN_ATTR_NESTING:
> -			if (smmu_domain->smmu) {
> -				ret = -EPERM;
> -				goto out_unlock;
> -			}
> -
> -			if (*(int *)data)
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_NESTED;
> -			else
> -				smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
> -			break;
>  		case DOMAIN_ATTR_IO_PGTABLE_CFG: {
>  			struct io_pgtable_domain_attr *pgtbl_cfg = data;
>  
> @@ -1603,6 +1610,7 @@ static struct iommu_ops arm_smmu_ops = {
>  	.dma_use_flush_queue	= arm_smmu_dma_use_flush_queue,
>  	.dma_enable_flush_queue	= arm_smmu_dma_enable_flush_queue,
>  	.domain_set_attr	= arm_smmu_domain_set_attr,
> +	.domain_enable_nesting	= arm_smmu_domain_enable_nesting,
>  	.of_xlate		= arm_smmu_of_xlate,
>  	.get_resv_regions	= arm_smmu_get_resv_regions,
>  	.put_resv_regions	= generic_iommu_put_resv_regions,
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index eaa80c33f4bc91..0f1374d6612a60 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -5423,32 +5423,22 @@ static bool intel_iommu_is_attach_deferred(struct iommu_domain *domain,
>  }
>  
>  static int
> -intel_iommu_domain_set_attr(struct iommu_domain *domain,
> -			    enum iommu_attr attr, void *data)
> +intel_iommu_domain_enable_nesting(struct iommu_domain *domain)
>  {
>  	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
>  	unsigned long flags;
> -	int ret = 0;
> +	int ret = -ENODEV;
>  
>  	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
>  		return -EINVAL;
>  
> -	switch (attr) {
> -	case DOMAIN_ATTR_NESTING:
> -		spin_lock_irqsave(&device_domain_lock, flags);
> -		if (nested_mode_support() &&
> -		    list_empty(&dmar_domain->devices)) {
> -			dmar_domain->flags |= DOMAIN_FLAG_NESTING_MODE;
> -			dmar_domain->flags &= ~DOMAIN_FLAG_USE_FIRST_LEVEL;
> -		} else {
> -			ret = -ENODEV;
> -		}
> -		spin_unlock_irqrestore(&device_domain_lock, flags);
> -		break;
> -	default:
> -		ret = -EINVAL;
> -		break;
> +	spin_lock_irqsave(&device_domain_lock, flags);
> +	if (nested_mode_support() && list_empty(&dmar_domain->devices)) {
> +		dmar_domain->flags |= DOMAIN_FLAG_NESTING_MODE;
> +		dmar_domain->flags &= ~DOMAIN_FLAG_USE_FIRST_LEVEL;
> +		ret = 0;
>  	}
> +	spin_unlock_irqrestore(&device_domain_lock, flags);
>  
>  	return ret;
>  }
> @@ -5556,7 +5546,7 @@ const struct iommu_ops intel_iommu_ops = {
>  	.domain_alloc		= intel_iommu_domain_alloc,
>  	.domain_free		= intel_iommu_domain_free,
>  	.dma_use_flush_queue	= intel_iommu_dma_use_flush_queue,
> -	.domain_set_attr	= intel_iommu_domain_set_attr,
> +	.domain_enable_nesting	= intel_iommu_domain_enable_nesting,
>  	.attach_dev		= intel_iommu_attach_device,
>  	.detach_dev		= intel_iommu_detach_device,
>  	.aux_attach_dev		= intel_iommu_aux_attach_device,
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 0f12c4d58cdc42..2e9e058501a953 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2685,6 +2685,14 @@ int iommu_domain_set_attr(struct iommu_domain *domain,
>  }
>  EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
>  
> +int iommu_domain_enable_nesting(struct iommu_domain *domain)
> +{
> +	if (!domain->ops->domain_enable_nesting)
> +		return -EINVAL;
> +	return domain->ops->domain_enable_nesting(domain);
> +}
> +EXPORT_SYMBOL_GPL(iommu_domain_enable_nesting);
> +
>  void iommu_get_resv_regions(struct device *dev, struct list_head *list)
>  {
>  	const struct iommu_ops *ops = dev->bus->iommu_ops;
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index c8e57f22f421c5..9cea4d80dd66ed 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -2320,10 +2320,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
>  	}
>  
>  	if (iommu->nesting) {
> -		int attr = 1;
> -
> -		ret = iommu_domain_set_attr(domain->domain, DOMAIN_ATTR_NESTING,
> -					    &attr);
> +		ret = iommu_domain_enable_nesting(domain->domain);
>  		if (ret)
>  			goto out_domain;
>  	}
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index f30de33c6ff56e..aed88aa3bd3edf 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -107,7 +107,6 @@ enum iommu_cap {
>   */
>  
>  enum iommu_attr {
> -	DOMAIN_ATTR_NESTING,	/* two stages of translation */
>  	DOMAIN_ATTR_IO_PGTABLE_CFG,
>  	DOMAIN_ATTR_MAX,
>  };
> @@ -196,6 +195,7 @@ struct iommu_iotlb_gather {
>   * @dma_use_flush_queue: Returns %true if a DMA flush queue is used
>   * @dma_enable_flush_queue: Try to enable the DMA flush queue
>   * @domain_set_attr: Change domain attributes
> + * @domain_enable_nesting: Enable nesting
>   * @get_resv_regions: Request list of reserved regions for a device
>   * @put_resv_regions: Free list of reserved regions for a device
>   * @apply_resv_region: Temporary helper call-back for iova reserved ranges
> @@ -248,6 +248,7 @@ struct iommu_ops {
>  	void (*dma_enable_flush_queue)(struct iommu_domain *domain);
>  	int (*domain_set_attr)(struct iommu_domain *domain,
>  			       enum iommu_attr attr, void *data);
> +	int (*domain_enable_nesting)(struct iommu_domain *domain);
>  
>  	/* Request/Free a list of reserved regions for a device */
>  	void (*get_resv_regions)(struct device *dev, struct list_head *list);
> @@ -494,6 +495,7 @@ extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
>  bool iommu_dma_use_flush_queue(struct iommu_domain *domain);
>  extern int iommu_domain_set_attr(struct iommu_domain *domain, enum iommu_attr,
>  				 void *data);
> +int iommu_domain_enable_nesting(struct iommu_domain *domain);
>  
>  extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
>  			      unsigned long iova, int flags);
> 
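
For reference, a minimal sketch of the caller side of the new interface
(everything except the iommu_* helpers themselves is illustrative): nesting
has to be enabled on a freshly allocated unmanaged domain before the first
attach, since the implementations above refuse it once the domain is already
in use; this is the ordering the vfio_iommu_type1 hunk relies on.

#include <linux/iommu.h>

/*
 * Sketch only, not part of the patch: enable nesting on an unmanaged
 * domain before attaching a group.  The SMMU and VT-d implementations
 * return -EPERM/-ENODEV once the domain is already bound to hardware,
 * so the ordering matters.
 */
static int example_setup_nested_domain(struct bus_type *bus,
				       struct iommu_group *group,
				       struct iommu_domain **out_domain)
{
	struct iommu_domain *domain;
	int ret;

	domain = iommu_domain_alloc(bus);	/* IOMMU_DOMAIN_UNMANAGED */
	if (!domain)
		return -ENOMEM;

	ret = iommu_domain_enable_nesting(domain);
	if (ret)
		goto out_free;

	ret = iommu_attach_group(domain, group);
	if (ret)
		goto out_free;

	*out_domain = domain;
	return 0;

out_free:
	iommu_domain_free(domain);
	return ret;
}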

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 15/17] iommu: remove DOMAIN_ATTR_NESTING
       [not found]     ` <20210314155813.GA788@lst.de>
@ 2021-03-15  7:52       ` Auger Eric
  2021-03-25  6:12         ` Tian, Kevin
  0 siblings, 1 reply; 13+ messages in thread
From: Auger Eric @ 2021-03-15  7:52 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: kvm, Will Deacon, Joerg Roedel, linuxppc-dev, dri-devel, Li Yang,
	iommu, netdev, linux-arm-kernel, Michael Ellerman,
	virtualization, freedreno, David Woodhouse, linux-arm-msm

Hi Christoph,

On 3/14/21 4:58 PM, Christoph Hellwig wrote:
> On Sun, Mar 14, 2021 at 11:44:52AM +0100, Auger Eric wrote:
> >> As mentioned by Robin, there are series planning to use
>> DOMAIN_ATTR_NESTING to get info about the nested caps of the iommu (ARM
>> and Intel):
>>
>> [Patch v8 00/10] vfio: expose virtual Shared Virtual Addressing to VMs
>> patches 1, 2, 3
>>
>> Is the plan to introduce a new domain_get_nesting_info ops then?
> 
> > The plan as usual would be to add it to the series adding that support.
> Not sure what the merge plans are - if the series is ready to be
> merged I could rebase on top of it, otherwise that series will need
> to add the method.
OK I think your series may be upstreamed first.

Thanks

Eric
> 
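
For context, the "get" side being discussed would replace the attribute with
a method that reports nesting capabilities. A purely hypothetical sketch of
what such a core wrapper could look like, mirroring the
iommu_domain_enable_nesting() wrapper added in the patch quoted earlier; the
op name and the fields of the info structure are placeholders, not the
interface proposed in the vSVA series:

/*
 * Hypothetical sketch only: assumes a new ->domain_get_nesting_info member
 * in struct iommu_ops; the struct layout below is a placeholder.
 */
struct iommu_nesting_info {
	u32	format;		/* stage-1 page-table format the IOMMU accepts */
	u32	features;	/* capability bits (PASID, fault reporting, ...) */
};

int iommu_domain_get_nesting_info(struct iommu_domain *domain,
				  struct iommu_nesting_info *info)
{
	if (!domain->ops->domain_get_nesting_info)
		return -EINVAL;
	return domain->ops->domain_get_nesting_info(domain, info);
}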

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 14/17] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE
       [not found]               ` <20210315083347.GA28445@lst.de>
@ 2021-03-16 13:03                 ` Robin Murphy
  0 siblings, 0 replies; 13+ messages in thread
From: Robin Murphy @ 2021-03-16 13:03 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: freedreno, kvm, Will Deacon, linux-arm-msm, dri-devel, Li Yang,
	iommu, Michael Ellerman, netdev, virtualization, linuxppc-dev,
	David Woodhouse, linux-arm-kernel

On 2021-03-15 08:33, Christoph Hellwig wrote:
> On Fri, Mar 12, 2021 at 04:18:24PM +0000, Robin Murphy wrote:
>>> Let me know what you think of the version here:
>>>
>>> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/iommu-cleanup
>>>
>>> I'll happily switch the patch to you as the author if you're fine with
>>> that as well.
>>
>> I still have reservations about removing the attribute API entirely and
>> pretending that io_pgtable_cfg is anything other than a SoC-specific
>> private interface,
> 
> I think a private interface would make more sense.  For now I've just
> condensed it down to a generic set of quirk bits and dropped the
> attrs structure, which seems like an ok middle ground for now.  That
> being said I wonder why that quirk isn't simply set in the device
> tree?

Because it's a software policy decision rather than any inherent 
property of the platform, and the DT certainly doesn't know *when* any 
particular device might prefer its IOMMU to use cacheable pagetables to 
minimise TLB miss latency vs. saving the cache capacity for larger data 
buffers. It really is most logical to decide this at the driver level.

In truth the overall concept *is* relatively generic (a trend towards 
larger system caches and cleverer usage is about both raw performance 
and saving power on off-SoC DRAM traffic), it's just the particular 
implementation of using io-pgtable to set an outer-cacheable walk 
attribute in an SMMU TCR that's pretty much specific to Qualcomm SoCs. 
Hence why having a common abstraction at the iommu_domain level, but 
where the exact details are free to vary across different IOMMUs and 
their respective client drivers, is in many ways an ideal fit.
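
From the client-driver side, the abstraction in question boils down to
something like the sketch below, written against the
DOMAIN_ATTR_IO_PGTABLE_CFG interface as it looks before the rework in this
series; the quirk name assumes the ARM outer-WBWA walk quirk from
io-pgtable.h:

#include <linux/iommu.h>
#include <linux/io-pgtable.h>

/*
 * Rough sketch of a client driver (e.g. a GPU driver) opting its domain
 * into outer-cacheable page-table walks; the quirk bit is assumed to be
 * the ARM outer-WBWA walk quirk.
 */
static int example_request_cacheable_walks(struct iommu_domain *domain)
{
	struct io_pgtable_domain_attr pgtbl_cfg = {
		.quirks = IO_PGTABLE_QUIRK_ARM_OUTER_WBWA,
	};

	/* a driver-level policy decision, not something the DT describes */
	return iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
				     &pgtbl_cfg);
}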

>> but the reworked patch on its own looks reasonable to
>> me, thanks! (I wasn't too convinced about the iommu_cmd_line wrappers
>> either...) Just iommu_get_dma_strict() needs an export since the SMMU
>> drivers can be modular - I consciously didn't add that myself since I was
>> mistakenly thinking only iommu-dma would call it.
> 
> Fixed.  Can I get your signoff for the patch?  Then I'll switch it to
> over to being attributed to you.

Sure - I would have thought that the one I originally posted still 
stands, but for the avoidance of doubt, for the parts of commit 
8b6d45c495bd in your tree that remain from what I wrote:

Signed-off-by: Robin Murphy <robin.murphy@arm.com>

Cheers,
Robin.
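
The export question comes down to a driver-side check like the sketch below,
assuming the helper in the branch under discussion has the form
bool iommu_get_dma_strict(struct iommu_domain *domain):

#include <linux/iommu.h>
#include <linux/io-pgtable.h>

/*
 * Sketch only: a modular SMMU driver keying the non-strict io-pgtable
 * quirk off the core strictness policy.  The helper's signature is
 * assumed from the branch being discussed, hence the need for an
 * EXPORT_SYMBOL_GPL.
 */
static void example_apply_strictness(struct iommu_domain *domain,
				     struct io_pgtable_cfg *pgtbl_cfg)
{
	if (!iommu_get_dma_strict(domain))
		pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
}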

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH 15/17] iommu: remove DOMAIN_ATTR_NESTING
  2021-03-15  7:52       ` Auger Eric
@ 2021-03-25  6:12         ` Tian, Kevin
  0 siblings, 0 replies; 13+ messages in thread
From: Tian, Kevin @ 2021-03-25  6:12 UTC (permalink / raw)
  To: Auger Eric, Christoph Hellwig
  Cc: freedreno, Liu, Yi L, kvm, Will Deacon, linux-arm-msm, dri-devel,
	Li Yang, iommu, Michael Ellerman, netdev, virtualization,
	linuxppc-dev, David Woodhouse, linux-arm-kernel

> From: Auger Eric
> Sent: Monday, March 15, 2021 3:52 PM
> To: Christoph Hellwig <hch@lst.de>
> Cc: kvm@vger.kernel.org; Will Deacon <will@kernel.org>; linuxppc-
> dev@lists.ozlabs.org; dri-devel@lists.freedesktop.org; Li Yang
> <leoyang.li@nxp.com>; iommu@lists.linux-foundation.org;
> 
> Hi Christoph,
> 
> On 3/14/21 4:58 PM, Christoph Hellwig wrote:
> > On Sun, Mar 14, 2021 at 11:44:52AM +0100, Auger Eric wrote:
> >> As mentioned by Robin, there are series planning to use
> >> DOMAIN_ATTR_NESTING to get info about the nested caps of the iommu
> (ARM
> >> and Intel):
> >>
> >> [Patch v8 00/10] vfio: expose virtual Shared Virtual Addressing to VMs
> >> patches 1, 2, 3
> >>
> >> Is the plan to introduce a new domain_get_nesting_info ops then?
> >
> > The plan as usual would be to add it to the series adding that support.
> > Not sure what the merge plans are - if the series is ready to be
> > merged I could rebase on top of it, otherwise that series will need
> > to add the method.
> OK I think your series may be upstreamed first.
> 

Agreed. The vSVA series is still undergoing a refactor based on Jason's
comments and thus won't be ready in the short term. It's better to let this
one go in first.

Thanks
Kevin

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2021-03-25  6:12 UTC | newest]

Thread overview: 13+ messages
     [not found] <20210301084257.945454-1-hch@lst.de>
2021-03-04 11:10 ` cleanup unused or almost unused IOMMU APIs and the FSL PAMU driver Joerg Roedel
     [not found] ` <20210301084257.945454-15-hch@lst.de>
2021-03-04 15:25   ` [PATCH 14/17] iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE Robin Murphy
     [not found]     ` <20210310091501.GC5928@lst.de>
     [not found]       ` <20210310092533.GA6819@lst.de>
2021-03-10 18:39         ` Robin Murphy
     [not found]           ` <20210311082609.GA6990@lst.de>
2021-03-12 16:18             ` Robin Murphy
     [not found]               ` <20210315083347.GA28445@lst.de>
2021-03-16 13:03                 ` Robin Murphy
     [not found] ` <20210301084257.945454-17-hch@lst.de>
2021-03-04 15:48   ` [PATCH 16/17] iommu: remove DOMAIN_ATTR_IO_PGTABLE_CFG Robin Murphy
2021-03-04 23:11     ` [Freedreno] " Rob Clark
2021-03-05 10:00       ` Will Deacon
     [not found]         ` <20210310085806.GB5928@lst.de>
2021-03-10 10:55           ` Will Deacon
2021-03-06  0:03 ` cleanup unused or almost unused IOMMU APIs and the FSL PAMU driver Li Yang
     [not found] ` <20210301084257.945454-16-hch@lst.de>
2021-03-14 10:44   ` [PATCH 15/17] iommu: remove DOMAIN_ATTR_NESTING Auger Eric
     [not found]     ` <20210314155813.GA788@lst.de>
2021-03-15  7:52       ` Auger Eric
2021-03-25  6:12         ` Tian, Kevin
