From: Robin Murphy <robin.murphy@arm.com>
To: Tom Murphy <tmurphy@arista.com>, iommu@lists.linux-foundation.org
Cc: dima@arista.com, jamessewart@arista.com, murphyt7@tcd.ie,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will.deacon@arm.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Kukjin Kim <kgene@kernel.org>,
	Krzysztof Kozlowski <krzk@kernel.org>,
	Matthias Brugger <matthias.bgg@gmail.com>,
	Rob Clark <robdclark@gmail.com>,
	Andy Gross <andy.gross@linaro.org>,
	David Brown <david.brown@linaro.org>,
	Heiko Stuebner <heiko@sntech.de>,
	Marc Zyngier <marc.zyngier@arm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org,
	linux-rockchip@lists.infradead.org
Subject: Re: [PATCH 2/9] iommu/dma-iommu: Add function to flush any cached not present IOTLB entries
Date: Tue, 16 Apr 2019 15:00:59 +0100	[thread overview]
Message-ID: <82ce70dc-b370-3bb0-bce8-2d32db4d6a0d@arm.com> (raw)
In-Reply-To: <20190411184741.27540-3-tmurphy@arista.com>

On 11/04/2019 19:47, Tom Murphy wrote:
> Both the AMD and Intel drivers can cache not present IOTLB entries. To
> convert these drivers to the dma-iommu api we need a generic way to
> flush the NP cache. IOMMU drivers which have a NP cache can implement
> the .flush_np_cache function in the iommu ops struct. I will implement
> .flush_np_cache for both the Intel and AMD drivers in later patches.
> 
> The Intel np-cache is described here:
> https://software.intel.com/sites/default/files/managed/c5/15/vt-directed-io-spec.pdf#G7.66452
> 
> And the AMD np-cache is described here:
> https://developer.amd.com/wordpress/media/2012/10/34434-IOMMU-Rev_1.26_2-11-09.pdf#page=63

Callers expect that once iommu_map() returns successfully, the mapping 
exists and is ready to use - if these drivers aren't handling this 
flushing internally, how are they not already broken for e.g. VFIO?

> Signed-off-by: Tom Murphy <tmurphy@arista.com>
> ---
>   drivers/iommu/dma-iommu.c | 10 ++++++++++
>   include/linux/iommu.h     |  3 +++
>   2 files changed, 13 insertions(+)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 1a4bff3f8427..cc5da30d6e58 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -594,6 +594,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>   			< size)
>   		goto out_free_sg;
>   
> +	if (domain->ops->flush_np_cache)
> +		domain->ops->flush_np_cache(domain, iova, size);
> +

This doesn't scale. At the very least, it should be internal to 
iommu_map() rather than exposed to be the responsibility of every 
external caller now and forever after.

That said, I've now gone and looked and AFAICS both the Intel and AMD 
drivers *do* appear to handle this in their iommu_ops::map callbacks 
already, so the whole patch does indeed seem bogus. What might be 
worthwhile, though, is seeing if there's scope to refactor those drivers 
to push some of it into an iommu_ops::iotlb_sync_map callback to 
optimise the flushing for multi-page mappings.

Robin.

>   	*handle = iova;
>   	sg_free_table(&sgt);
>   	return pages;
> @@ -652,6 +655,10 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>   		iommu_dma_free_iova(cookie, iova, size);
>   		return DMA_MAPPING_ERROR;
>   	}
> +
> +	if (domain->ops->flush_np_cache)
> +		domain->ops->flush_np_cache(domain, iova, size);
> +
>   	return iova + iova_off;
>   }
>   
> @@ -812,6 +819,9 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   	if (iommu_map_sg_atomic(domain, iova, sg, nents, prot) < iova_len)
>   		goto out_free_iova;
>   
> +	if (domain->ops->flush_np_cache)
> +		domain->ops->flush_np_cache(domain, iova, iova_len);
> +
>   	return __finalise_sg(dev, sg, nents, iova);
>   
>   out_free_iova:
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 75559918d9bd..47ff8d731d6a 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -173,6 +173,7 @@ struct iommu_resv_region {
>    * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
>    * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
>    *            queue
> + * @flush_np_cache: Flush the non present entry cache
>    * @iova_to_phys: translate iova to physical address
>    * @add_device: add device to iommu grouping
>    * @remove_device: remove device from iommu grouping
> @@ -209,6 +210,8 @@ struct iommu_ops {
>   				unsigned long iova, size_t size);
>   	void (*iotlb_sync_map)(struct iommu_domain *domain);
>   	void (*iotlb_sync)(struct iommu_domain *domain);
> +	void (*flush_np_cache)(struct iommu_domain *domain,
> +				unsigned long iova, size_t size);
>   	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
>   	int (*add_device)(struct device *dev);
>   	void (*remove_device)(struct device *dev);
> 

