From: Lu Baolu <baolu.lu@linux.intel.com>
To: Georgi Djakov <quic_c_gdjako@quicinc.com>,
	will@kernel.org, robin.murphy@arm.com
Cc: baolu.lu@linux.intel.com, joro@8bytes.org, isaacm@codeaurora.org,
	pratikp@codeaurora.org, iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, djakov@kernel.org
Subject: Re: [PATCH v7 07/15] iommu: Hook up '->unmap_pages' driver callback
Date: Thu, 17 Jun 2021 15:18:03 +0800
Message-ID: <0cb188c0-defd-e179-ad0e-471f48dfb54e@linux.intel.com>
In-Reply-To: <1623850736-389584-8-git-send-email-quic_c_gdjako@quicinc.com>

On 6/16/21 9:38 PM, Georgi Djakov wrote:
> From: Will Deacon <will@kernel.org>
> 
> Extend iommu_pgsize() to populate an optional 'count' parameter so that
> we can direct the unmapping operation to the ->unmap_pages callback if
> it has been provided by the driver.
> 
> Signed-off-by: Will Deacon <will@kernel.org>
> Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
> Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com>
> ---
>   drivers/iommu/iommu.c | 59 +++++++++++++++++++++++++++++++++++++++++++--------
>   1 file changed, 50 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 80e14c139d40..725622c7e603 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2376,11 +2376,11 @@ phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
>   EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
>   
>   static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
> -			   phys_addr_t paddr, size_t size)
> +			   phys_addr_t paddr, size_t size, size_t *count)
>   {
> -	unsigned int pgsize_idx;
> +	unsigned int pgsize_idx, pgsize_idx_next;
>   	unsigned long pgsizes;
> -	size_t pgsize;
> +	size_t offset, pgsize, pgsize_next;
>   	unsigned long addr_merge = paddr | iova;
>   
>   	/* Page sizes supported by the hardware and small enough for @size */
> @@ -2396,7 +2396,36 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
>   	/* Pick the biggest page size remaining */
>   	pgsize_idx = __fls(pgsizes);
>   	pgsize = BIT(pgsize_idx);
> +	if (!count)
> +		return pgsize;
>   
> +	/* Find the next biggest supported page size, if it exists */
> +	pgsizes = domain->pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
> +	if (!pgsizes)
> +		goto out_set_count;
> +
> +	pgsize_idx_next = __ffs(pgsizes);
> +	pgsize_next = BIT(pgsize_idx_next);
> +
> +	/*
> +	 * There's no point trying a bigger page size unless the virtual
> +	 * and physical addresses are similarly offset within the larger page.
> +	 */
> +	if ((iova ^ paddr) & (pgsize_next - 1))
> +		goto out_set_count;
> +
> +	/* Calculate the offset to the next page size alignment boundary */
> +	offset = pgsize_next - (addr_merge & (pgsize_next - 1));
> +
> +	/*
> +	 * If size is big enough to accommodate the larger page, reduce
> +	 * the number of smaller pages.
> +	 */
> +	if (offset + pgsize_next <= size)
> +		size = offset;
> +
> +out_set_count:
> +	*count = size >> pgsize_idx;
>   	return pgsize;
>   }
>   
> @@ -2434,7 +2463,7 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
>   	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
>   
>   	while (size) {
> -		size_t pgsize = iommu_pgsize(domain, iova, paddr, size);
> +		size_t pgsize = iommu_pgsize(domain, iova, paddr, size, NULL);
>   
>   		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
>   			 iova, &paddr, pgsize);
> @@ -2485,6 +2514,19 @@ int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
>   }
>   EXPORT_SYMBOL_GPL(iommu_map_atomic);
>   
> +static size_t __iommu_unmap_pages(struct iommu_domain *domain,
> +				  unsigned long iova, size_t size,
> +				  struct iommu_iotlb_gather *iotlb_gather)
> +{
> +	const struct iommu_ops *ops = domain->ops;
> +	size_t pgsize, count;
> +
> +	pgsize = iommu_pgsize(domain, iova, iova, size, &count);
> +	return ops->unmap_pages ?
> +	       ops->unmap_pages(domain, iova, pgsize, count, iotlb_gather) :
> +	       ops->unmap(domain, iova, pgsize, iotlb_gather);
> +}
> +
>   static size_t __iommu_unmap(struct iommu_domain *domain,
>   			    unsigned long iova, size_t size,
>   			    struct iommu_iotlb_gather *iotlb_gather)
> @@ -2494,7 +2536,7 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
>   	unsigned long orig_iova = iova;
>   	unsigned int min_pagesz;
>   
> -	if (unlikely(ops->unmap == NULL ||
> +	if (unlikely(!(ops->unmap || ops->unmap_pages) ||
>   		     domain->pgsize_bitmap == 0UL))
>   		return 0;
>   
> @@ -2522,10 +2564,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
>   	 * or we hit an area that isn't mapped.
>   	 */
>   	while (unmapped < size) {
> -		size_t pgsize;
> -
> -		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped);
> -		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
> +		unmapped_page = __iommu_unmap_pages(domain, iova,
> +						    size - unmapped,
> +						    iotlb_gather);
>   		if (!unmapped_page)
>   			break;
>   
> 
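The pgsize/count selection above looks correct to me. For anyone who wants
to check the arithmetic outside the kernel, below is a minimal userspace
model of iommu_pgsize() as extended by this patch. It is only a sketch:
the pgsize_bitmap value is made up (4K | 2M | 1G), and __fls()/__ffs()/
BIT()/GENMASK() are approximated with compiler builtins and macros that
assume a 64-bit unsigned long.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define BIT(n)		(1UL << (n))
#define GENMASK(h, l)	(((~0UL) << (l)) & (~0UL >> (63 - (h))))
#define FLS(x)		(63 - __builtin_clzl(x))	/* models __fls() */
#define FFS(x)		__builtin_ctzl(x)		/* models __ffs() */

static size_t pgsize_model(unsigned long pgsize_bitmap, unsigned long iova,
			   unsigned long paddr, size_t size, size_t *count)
{
	unsigned long addr_merge = paddr | iova;
	unsigned long pgsizes;
	unsigned int pgsize_idx, pgsize_idx_next;
	size_t offset, pgsize, pgsize_next;

	/* Page sizes supported by the "hardware" and small enough for @size */
	pgsizes = pgsize_bitmap & GENMASK(FLS(size), 0);

	/* Constrain them further based on the maximum alignment */
	if (addr_merge)
		pgsizes &= GENMASK(FFS(addr_merge), 0);

	assert(pgsizes);	/* the kernel BUG()s here instead */

	/* Pick the biggest page size remaining */
	pgsize_idx = FLS(pgsizes);
	pgsize = BIT(pgsize_idx);
	if (!count)
		return pgsize;

	/* Find the next biggest supported page size, if it exists */
	pgsizes = pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
	if (!pgsizes)
		goto out_set_count;

	pgsize_idx_next = FFS(pgsizes);
	pgsize_next = BIT(pgsize_idx_next);

	/* A bigger size only helps if iova and paddr are offset alike */
	if ((iova ^ paddr) & (pgsize_next - 1))
		goto out_set_count;

	/* Offset to the next page size alignment boundary */
	offset = pgsize_next - (addr_merge & (pgsize_next - 1));

	/* If the larger page fits past the boundary, stop at the boundary */
	if (offset + pgsize_next <= size)
		size = offset;

out_set_count:
	*count = size >> pgsize_idx;
	return pgsize;
}

int main(void)
{
	unsigned long bitmap = BIT(12) | BIT(21) | BIT(30); /* 4K | 2M | 1G */
	size_t count;
	size_t pgsize = pgsize_model(bitmap, 0x201000, 0x201000, 0x400000,
				     &count);

	printf("pgsize=0x%zx count=%zu\n", pgsize, count);
	return 0;
}

Running this prints pgsize=0x1000 count=511: for a 4MiB range starting at
0x201000, the first __iommu_unmap_pages() call covers the 511 4K pages up
to the next 2MiB boundary, and the following iteration of the loop in
__iommu_unmap() can then proceed with 2MiB entries. The map path keeps
passing a NULL count and so still gets one page size at a time.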

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>

Best regards,
baolu


Thread overview: 75+ messages
2021-06-16 13:38 [PATCH v7 00/15] Optimizing iommu_[map/unmap] performance Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 01/15] iommu/io-pgtable: Introduce unmap_pages() as a page table op Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 02/15] iommu: Add an unmap_pages() op for IOMMU drivers Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 03/15] iommu/io-pgtable: Introduce map_pages() as a page table op Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 04/15] iommu: Add a map_pages() op for IOMMU drivers Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 05/15] iommu: Use bitmap to calculate page size in iommu_pgsize() Georgi Djakov
2021-06-17  7:16   ` Lu Baolu
2021-06-16 13:38 ` [PATCH v7 06/15] iommu: Split 'addr_merge' argument to iommu_pgsize() into separate parts Georgi Djakov
2021-06-17  7:17   ` Lu Baolu
2021-06-16 13:38 ` [PATCH v7 07/15] iommu: Hook up '->unmap_pages' driver callback Georgi Djakov
2021-06-17  7:18   ` Lu Baolu [this message]
2021-06-16 13:38 ` [PATCH v7 08/15] iommu: Add support for the map_pages() callback Georgi Djakov
2021-06-17  7:18   ` Lu Baolu
2021-06-16 13:38 ` [PATCH v7 09/15] iommu/io-pgtable-arm: Prepare PTE methods for handling multiple entries Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 10/15] iommu/io-pgtable-arm: Implement arm_lpae_unmap_pages() Georgi Djakov
2021-07-15  9:31   ` Kunkun Jiang
2021-06-16 13:38 ` [PATCH v7 11/15] iommu/io-pgtable-arm: Implement arm_lpae_map_pages() Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 12/15] iommu/io-pgtable-arm-v7s: Implement arm_v7s_unmap_pages() Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 13/15] iommu/io-pgtable-arm-v7s: Implement arm_v7s_map_pages() Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 14/15] iommu/arm-smmu: Implement the unmap_pages() IOMMU driver callback Georgi Djakov
2021-06-16 13:38 ` [PATCH v7 15/15] iommu/arm-smmu: Implement the map_pages() IOMMU driver callback Georgi Djakov
2021-07-14 14:24 ` [PATCH v7 00/15] Optimizing iommu_[map/unmap] performance Georgi Djakov
2021-07-15  1:23   ` Lu Baolu
2021-07-15  1:51     ` chenxiang (M)
2021-07-26 10:37 ` Joerg Roedel
