* [PATCH v1 0/2] iommu/vt-d: boost the mapping process
@ 2021-09-15 15:21 ` Longpeng(Mike)
From: Longpeng(Mike) @ 2021-09-15 15:21 UTC (permalink / raw)
To: dwmw2, baolu.lu, joro, will
Cc: iommu, linux-kernel, arei.gonglei, Longpeng(Mike)
Hi guys,
We found that __domain_mapping() takes too long when the memory
region is large, so this patchset tries to make it faster. The
performance numbers can be found in PATCH 2; please review when
you are free, thanks.
Longpeng(Mike) (2):
iommu/vt-d: convert the return type of first_pte_in_page to bool
iommu/vt-d: avoid duplicated removing in __domain_mapping
drivers/iommu/intel/iommu.c | 12 +++++++-----
include/linux/intel-iommu.h | 8 +++++++-
2 files changed, 14 insertions(+), 6 deletions(-)
--
1.8.3.1
* [PATCH v1 1/2] iommu/vt-d: convert the return type of first_pte_in_page to bool
2021-09-15 15:21 ` Longpeng(Mike)
@ 2021-09-15 15:21 ` Longpeng(Mike)
From: Longpeng(Mike) @ 2021-09-15 15:21 UTC (permalink / raw)
To: dwmw2, baolu.lu, joro, will
Cc: iommu, linux-kernel, arei.gonglei, Longpeng(Mike)
first_pte_in_page() returns a boolean value, so let's convert its
return type to bool.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
include/linux/intel-iommu.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 05a65eb..a590b00 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -708,7 +708,7 @@ static inline bool dma_pte_superpage(struct dma_pte *pte)
return (pte->val & DMA_PTE_LARGE_PAGE);
}
-static inline int first_pte_in_page(struct dma_pte *pte)
+static inline bool first_pte_in_page(struct dma_pte *pte)
{
return !((unsigned long)pte & ~VTD_PAGE_MASK);
}
--
1.8.3.1
* [PATCH v1 2/2] iommu/vt-d: avoid duplicated removing in __domain_mapping
2021-09-15 15:21 ` Longpeng(Mike)
@ 2021-09-15 15:21 ` Longpeng(Mike)
From: Longpeng(Mike) @ 2021-09-15 15:21 UTC (permalink / raw)
To: dwmw2, baolu.lu, joro, will
Cc: iommu, linux-kernel, arei.gonglei, Longpeng(Mike)
__domain_mapping() always removes the pages in the range from
'iov_pfn' to 'end_pfn', but 'end_pfn' is always the last pfn of
the range that the caller wants to map.
This introduces many duplicated removals and makes the map
operation take too long, for example:
Map iova=0x100000,nr_pages=0x7d61800
iov_pfn: 0x100000, end_pfn: 0x7e617ff
iov_pfn: 0x140000, end_pfn: 0x7e617ff
iov_pfn: 0x180000, end_pfn: 0x7e617ff
iov_pfn: 0x1c0000, end_pfn: 0x7e617ff
iov_pfn: 0x200000, end_pfn: 0x7e617ff
...
it takes about 50ms in total.
We can reduce the cost by recalculating 'end_pfn' and limiting
it to the boundary of the end of the pte page.
Map iova=0x100000,nr_pages=0x7d61800
iov_pfn: 0x100000, end_pfn: 0x13ffff
iov_pfn: 0x140000, end_pfn: 0x17ffff
iov_pfn: 0x180000, end_pfn: 0x1bffff
iov_pfn: 0x1c0000, end_pfn: 0x1fffff
iov_pfn: 0x200000, end_pfn: 0x23ffff
...
it only needs 9ms now.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
drivers/iommu/intel/iommu.c | 12 +++++++-----
include/linux/intel-iommu.h | 6 ++++++
2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index d75f59a..87cbf34 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -2354,12 +2354,18 @@ static void switch_to_super_page(struct dmar_domain *domain,
return -ENOMEM;
first_pte = pte;
+ lvl_pages = lvl_to_nr_pages(largepage_lvl);
+ BUG_ON(nr_pages < lvl_pages);
+
/* It is large page*/
if (largepage_lvl > 1) {
unsigned long end_pfn;
+ unsigned long pages_to_remove;
pteval |= DMA_PTE_LARGE_PAGE;
- end_pfn = ((iov_pfn + nr_pages) & level_mask(largepage_lvl)) - 1;
+ pages_to_remove = min_t(unsigned long, nr_pages,
+ nr_pte_to_next_page(pte) * lvl_pages);
+ end_pfn = iov_pfn + pages_to_remove - 1;
switch_to_super_page(domain, iov_pfn, end_pfn, largepage_lvl);
} else {
pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
@@ -2381,10 +2387,6 @@ static void switch_to_super_page(struct dmar_domain *domain,
WARN_ON(1);
}
- lvl_pages = lvl_to_nr_pages(largepage_lvl);
-
- BUG_ON(nr_pages < lvl_pages);
-
nr_pages -= lvl_pages;
iov_pfn += lvl_pages;
phys_pfn += lvl_pages;
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index a590b00..4bff70c 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -713,6 +713,12 @@ static inline bool first_pte_in_page(struct dma_pte *pte)
return !((unsigned long)pte & ~VTD_PAGE_MASK);
}
+static inline int nr_pte_to_next_page(struct dma_pte *pte)
+{
+ return first_pte_in_page(pte) ? BIT_ULL(VTD_STRIDE_SHIFT) :
+ (struct dma_pte *)VTD_PAGE_ALIGN((unsigned long)pte) - pte;
+}
+
extern struct dmar_drhd_unit * dmar_find_matched_drhd_unit(struct pci_dev *dev);
extern int dmar_find_matched_atsr_unit(struct pci_dev *dev);
--
1.8.3.1
* Re: [PATCH v1 2/2] iommu/vt-d: avoid duplicated removing in __domain_mapping
2021-09-15 15:21 ` Longpeng(Mike)
@ 2021-09-30 14:14 ` Lu Baolu
From: Lu Baolu @ 2021-09-30 14:14 UTC (permalink / raw)
To: Longpeng(Mike), dwmw2, joro, will
Cc: baolu.lu, iommu, linux-kernel, arei.gonglei
Hi Longpeng,
On 2021/9/15 23:21, Longpeng(Mike) wrote:
> __domain_mapping() always removes the pages in the range from
> 'iov_pfn' to 'end_pfn', but the 'end_pfn' is always the last pfn
> of the range that the caller wants to map.
>
> This introduces many duplicated removals and makes the map
> operation take too long, for example:
>
> Map iova=0x100000,nr_pages=0x7d61800
> iov_pfn: 0x100000, end_pfn: 0x7e617ff
> iov_pfn: 0x140000, end_pfn: 0x7e617ff
> iov_pfn: 0x180000, end_pfn: 0x7e617ff
> iov_pfn: 0x1c0000, end_pfn: 0x7e617ff
> iov_pfn: 0x200000, end_pfn: 0x7e617ff
> ...
> it takes about 50ms in total.
>
> We can reduce the cost by recalculating 'end_pfn' and limiting
> it to the boundary of the end of the pte page.
>
> Map iova=0x100000,nr_pages=0x7d61800
> iov_pfn: 0x100000, end_pfn: 0x13ffff
> iov_pfn: 0x140000, end_pfn: 0x17ffff
> iov_pfn: 0x180000, end_pfn: 0x1bffff
> iov_pfn: 0x1c0000, end_pfn: 0x1fffff
> iov_pfn: 0x200000, end_pfn: 0x23ffff
> ...
> it only needs 9ms now.
>
> Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
The 0day robot reports the compile error below when building a 32-bit
kernel (make W=1 ARCH=i386):
All errors (new ones prefixed by >>):
In file included from drivers/gpu/drm/i915/i915_drv.h:43,
                 from drivers/gpu/drm/i915/display/intel_display_types.h:47,
                 from drivers/gpu/drm/i915/display/intel_dsi.h:30,
                 from <command-line>:
include/linux/intel-iommu.h: In function 'nr_pte_to_next_page':
>> include/linux/intel-iommu.h:719:3: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
719 | (struct dma_pte *)VTD_PAGE_ALIGN((unsigned long)pte) - pte;
| ^
cc1: all warnings being treated as errors
vim +719 include/linux/intel-iommu.h
715
716 static inline int nr_pte_to_next_page(struct dma_pte *pte)
717 {
718 return first_pte_in_page(pte) ? BIT_ULL(VTD_STRIDE_SHIFT) :
> 719 (struct dma_pte *)VTD_PAGE_ALIGN((unsigned long)pte) - pte;
720 }
721
Can you please take a look at this?
Best regards,
baolu
> ---
> drivers/iommu/intel/iommu.c | 12 +++++++-----
> include/linux/intel-iommu.h | 6 ++++++
> 2 files changed, 13 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index d75f59a..87cbf34 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -2354,12 +2354,18 @@ static void switch_to_super_page(struct dmar_domain *domain,
> return -ENOMEM;
> first_pte = pte;
>
> + lvl_pages = lvl_to_nr_pages(largepage_lvl);
> + BUG_ON(nr_pages < lvl_pages);
> +
> /* It is large page*/
> if (largepage_lvl > 1) {
> unsigned long end_pfn;
> + unsigned long pages_to_remove;
>
> pteval |= DMA_PTE_LARGE_PAGE;
> - end_pfn = ((iov_pfn + nr_pages) & level_mask(largepage_lvl)) - 1;
> + pages_to_remove = min_t(unsigned long, nr_pages,
> + nr_pte_to_next_page(pte) * lvl_pages);
> + end_pfn = iov_pfn + pages_to_remove - 1;
> switch_to_super_page(domain, iov_pfn, end_pfn, largepage_lvl);
> } else {
> pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
> @@ -2381,10 +2387,6 @@ static void switch_to_super_page(struct dmar_domain *domain,
> WARN_ON(1);
> }
>
> - lvl_pages = lvl_to_nr_pages(largepage_lvl);
> -
> - BUG_ON(nr_pages < lvl_pages);
> -
> nr_pages -= lvl_pages;
> iov_pfn += lvl_pages;
> phys_pfn += lvl_pages;
> diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
> index a590b00..4bff70c 100644
> --- a/include/linux/intel-iommu.h
> +++ b/include/linux/intel-iommu.h
> @@ -713,6 +713,12 @@ static inline bool first_pte_in_page(struct dma_pte *pte)
> return !((unsigned long)pte & ~VTD_PAGE_MASK);
> }
>
> +static inline int nr_pte_to_next_page(struct dma_pte *pte)
> +{
> + return first_pte_in_page(pte) ? BIT_ULL(VTD_STRIDE_SHIFT) :
> + (struct dma_pte *)VTD_PAGE_ALIGN((unsigned long)pte) - pte;
> +}
> +
> extern struct dmar_drhd_unit * dmar_find_matched_drhd_unit(struct pci_dev *dev);
> extern int dmar_find_matched_atsr_unit(struct pci_dev *dev);
>
>