All of lore.kernel.org
* [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
@ 2017-06-08 11:35 ` Ard Biesheuvel
  0 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2017-06-08 11:35 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, mhocko, zhongjiang, labbott, mark.rutland,
	linux-arm-kernel, Ard Biesheuvel

Existing code that uses vmalloc_to_page() may assume that any
address for which is_vmalloc_addr() returns true may be passed
into vmalloc_to_page() to retrieve the associated struct page.

This is not an unreasonable assumption to make, but on architectures
that have CONFIG_HAVE_ARCH_HUGE_VMAP=y, it no longer holds, and we
need to ensure that vmalloc_to_page() does not go off into the weeds
trying to dereference huge PUDs or PMDs as table entries.

Given that vmalloc() and vmap() themselves never create huge
mappings or deal with compound pages at all, there is no correct
value to return in this case, so return NULL instead, and issue a
warning.
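[Editorial note: the failure mode can be modelled in plain userspace C. The entry types and descend() helper below are illustrative stand-ins for the kernel's page-table accessors, not kernel API.]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of one level of the walk in vmalloc_to_page():
 * an entry is empty, a pointer to a next-level table, or a huge leaf.
 * All names here are hypothetical, not kernel API. */
enum ent_type { ENT_NONE, ENT_TABLE, ENT_HUGE };

struct ent {
	enum ent_type type;
	void *next;	/* next-level table (only valid for ENT_TABLE) */
};

/* Mirrors the patched check: a huge leaf has no table underneath it,
 * so descending would misinterpret the leaf as a table pointer.
 * Returning NULL here corresponds to the new WARN_ON_ONCE() path. */
static void *descend(const struct ent *e)
{
	if (e->type == ENT_NONE)
		return NULL;		/* the existing pXd_none() case */
	if (e->type == ENT_HUGE)
		return NULL;		/* the pXd_huge() case added by the patch */
	return e->next;			/* safe to walk further */
}
```

Before the patch, the huge case fell through to the dereference, which is exactly the going-off-into-the-weeds scenario described above.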

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
This is a followup to '[PATCH v2] mm: vmalloc: make vmalloc_to_page()
deal with PMD/PUD mappings', hence the v3. The root issue with /proc/kcore
on arm64 is now handled by '[PATCH] mm: vmalloc: simplify vread/vwrite to
use existing mappings' [1], and this patch now only complements it by
taking care of other vmalloc_to_page() users.

[0] http://marc.info/?l=linux-mm&m=149641886821855&w=2
[1] http://marc.info/?l=linux-mm&m=149685966530180&w=2

 include/linux/hugetlb.h | 13 +++++++++----
 mm/vmalloc.c            |  5 +++--
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b857fc8cc2ec..b7166e5426b6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -121,8 +121,6 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 				pmd_t *pmd, int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int flags);
-int pmd_huge(pmd_t pmd);
-int pud_huge(pud_t pud);
 unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
@@ -150,8 +148,6 @@ static inline void hugetlb_show_meminfo(void)
 #define follow_huge_pmd(mm, addr, pmd, flags)	NULL
 #define follow_huge_pud(mm, addr, pud, flags)	NULL
 #define prepare_hugepage_range(file, addr, len)	(-EINVAL)
-#define pmd_huge(x)	0
-#define pud_huge(x)	0
 #define is_hugepage_only_range(mm, addr, len)	0
 #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
 #define hugetlb_fault(mm, vma, addr, flags)	({ BUG(); 0; })
@@ -190,6 +186,15 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
 }
 
 #endif /* !CONFIG_HUGETLB_PAGE */
+
+#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_HAVE_ARCH_HUGE_VMAP)
+int pmd_huge(pmd_t pmd);
+int pud_huge(pud_t pud);
+#else
+#define pmd_huge(x)	0
+#define pud_huge(x)	0
+#endif
+
 /*
  * hugepages at page global directory. If arch support
  * hugepages at pgd level, they need to define this.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 982d29511f92..67e1a304c467 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -12,6 +12,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/highmem.h>
+#include <linux/hugetlb.h>
 #include <linux/sched/signal.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
@@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 	if (p4d_none(*p4d))
 		return NULL;
 	pud = pud_offset(p4d, addr);
-	if (pud_none(*pud))
+	if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
 		return NULL;
 	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
+	if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
 		return NULL;
 
 	ptep = pte_offset_map(pmd, addr);
-- 
2.9.3

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 11:35 ` Ard Biesheuvel
@ 2017-06-08 11:36   ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2017-06-08 11:36 UTC (permalink / raw)
  To: linux-mm, Dave Hansen
  Cc: Andrew Morton, Michal Hocko, Zhong Jiang, Laura Abbott,
	Mark Rutland, linux-arm-kernel, Ard Biesheuvel

(+ Dave)

On 8 June 2017 at 11:35, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Existing code that uses vmalloc_to_page() may assume that any
> address for which is_vmalloc_addr() returns true may be passed
> into vmalloc_to_page() to retrieve the associated struct page.
>
> This is not an unreasonable assumption to make, but on architectures
> that have CONFIG_HAVE_ARCH_HUGE_VMAP=y, it no longer holds, and we
> need to ensure that vmalloc_to_page() does not go off into the weeds
> trying to dereference huge PUDs or PMDs as table entries.
>
> Given that vmalloc() and vmap() themselves never create huge
> mappings or deal with compound pages at all, there is no correct
> value to return in this case, so return NULL instead, and issue a
> warning.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> This is a followup to '[PATCH v2] mm: vmalloc: make vmalloc_to_page()
> deal with PMD/PUD mappings', hence the v3. The root issue with /proc/kcore
> on arm64 is now handled by '[PATCH] mm: vmalloc: simplify vread/vwrite to
> use existing mappings' [1], and this patch now only complements it by
> taking care of other vmalloc_to_page() users.
>
> [0] http://marc.info/?l=linux-mm&m=149641886821855&w=2
> [1] http://marc.info/?l=linux-mm&m=149685966530180&w=2
>
>  include/linux/hugetlb.h | 13 +++++++++----
>  mm/vmalloc.c            |  5 +++--
>  2 files changed, 12 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index b857fc8cc2ec..b7166e5426b6 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -121,8 +121,6 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>                                 pmd_t *pmd, int flags);
>  struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
>                                 pud_t *pud, int flags);
> -int pmd_huge(pmd_t pmd);
> -int pud_huge(pud_t pud);
>  unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>                 unsigned long address, unsigned long end, pgprot_t newprot);
>
> @@ -150,8 +148,6 @@ static inline void hugetlb_show_meminfo(void)
>  #define follow_huge_pmd(mm, addr, pmd, flags)  NULL
>  #define follow_huge_pud(mm, addr, pud, flags)  NULL
>  #define prepare_hugepage_range(file, addr, len)        (-EINVAL)
> -#define pmd_huge(x)    0
> -#define pud_huge(x)    0
>  #define is_hugepage_only_range(mm, addr, len)  0
>  #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
>  #define hugetlb_fault(mm, vma, addr, flags)    ({ BUG(); 0; })
> @@ -190,6 +186,15 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
>  }
>
>  #endif /* !CONFIG_HUGETLB_PAGE */
> +
> +#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_HAVE_ARCH_HUGE_VMAP)
> +int pmd_huge(pmd_t pmd);
> +int pud_huge(pud_t pud);
> +#else
> +#define pmd_huge(x)    0
> +#define pud_huge(x)    0
> +#endif
> +
>  /*
>   * hugepages at page global directory. If arch support
>   * hugepages at pgd level, they need to define this.
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 982d29511f92..67e1a304c467 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -12,6 +12,7 @@
>  #include <linux/mm.h>
>  #include <linux/module.h>
>  #include <linux/highmem.h>
> +#include <linux/hugetlb.h>
>  #include <linux/sched/signal.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
> @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>         if (p4d_none(*p4d))
>                 return NULL;
>         pud = pud_offset(p4d, addr);
> -       if (pud_none(*pud))
> +       if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
>                 return NULL;
>         pmd = pmd_offset(pud, addr);
> -       if (pmd_none(*pmd))
> +       if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
>                 return NULL;
>
>         ptep = pte_offset_map(pmd, addr);
> --
> 2.9.3
>


* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 11:35 ` Ard Biesheuvel
@ 2017-06-08 12:59   ` Mark Rutland
  -1 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2017-06-08 12:59 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-mm, akpm, mhocko, zhongjiang, labbott, linux-arm-kernel

On Thu, Jun 08, 2017 at 11:35:48AM +0000, Ard Biesheuvel wrote:
> Existing code that uses vmalloc_to_page() may assume that any
> address for which is_vmalloc_addr() returns true may be passed
> into vmalloc_to_page() to retrieve the associated struct page.
> 
> This is not an unreasonable assumption to make, but on architectures
> that have CONFIG_HAVE_ARCH_HUGE_VMAP=y, it no longer holds, and we
> need to ensure that vmalloc_to_page() does not go off into the weeds
> trying to dereference huge PUDs or PMDs as table entries.
> 
> Given that vmalloc() and vmap() themselves never create huge
> mappings or deal with compound pages at all, there is no correct
> value to return in this case, so return NULL instead, and issue a
> warning.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> This is a followup to '[PATCH v2] mm: vmalloc: make vmalloc_to_page()
> deal with PMD/PUD mappings', hence the v3. The root issue with /proc/kcore
> on arm64 is now handled by '[PATCH] mm: vmalloc: simplify vread/vwrite to
> use existing mappings' [1], and this patch now only complements it by
> taking care of other vmalloc_to_page() users.
> 
> [0] http://marc.info/?l=linux-mm&m=149641886821855&w=2
> [1] http://marc.info/?l=linux-mm&m=149685966530180&w=2
> 
>  include/linux/hugetlb.h | 13 +++++++++----
>  mm/vmalloc.c            |  5 +++--
>  2 files changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index b857fc8cc2ec..b7166e5426b6 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -121,8 +121,6 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>  				pmd_t *pmd, int flags);
>  struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
>  				pud_t *pud, int flags);
> -int pmd_huge(pmd_t pmd);
> -int pud_huge(pud_t pud);
>  unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>  		unsigned long address, unsigned long end, pgprot_t newprot);
>  
> @@ -150,8 +148,6 @@ static inline void hugetlb_show_meminfo(void)
>  #define follow_huge_pmd(mm, addr, pmd, flags)	NULL
>  #define follow_huge_pud(mm, addr, pud, flags)	NULL
>  #define prepare_hugepage_range(file, addr, len)	(-EINVAL)
> -#define pmd_huge(x)	0
> -#define pud_huge(x)	0
>  #define is_hugepage_only_range(mm, addr, len)	0
>  #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
>  #define hugetlb_fault(mm, vma, addr, flags)	({ BUG(); 0; })
> @@ -190,6 +186,15 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
>  }
>  
>  #endif /* !CONFIG_HUGETLB_PAGE */
> +
> +#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_HAVE_ARCH_HUGE_VMAP)
> +int pmd_huge(pmd_t pmd);
> +int pud_huge(pud_t pud);
> +#else
> +#define pmd_huge(x)	0
> +#define pud_huge(x)	0
> +#endif
> +
>  /*
>   * hugepages at page global directory. If arch support
>   * hugepages at pgd level, they need to define this.
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 982d29511f92..67e1a304c467 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -12,6 +12,7 @@
>  #include <linux/mm.h>
>  #include <linux/module.h>
>  #include <linux/highmem.h>
> +#include <linux/hugetlb.h>
>  #include <linux/sched/signal.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
> @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>  	if (p4d_none(*p4d))
>  		return NULL;
>  	pud = pud_offset(p4d, addr);
> -	if (pud_none(*pud))
> +	if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
>  		return NULL;
>  	pmd = pmd_offset(pud, addr);
> -	if (pmd_none(*pmd))
> +	if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
>  		return NULL;

I think it might be better to use p*d_bad() here, since that doesn't
depend on CONFIG_HUGETLB_PAGE.

While the cross-arch semantics are a little fuzzy, my understanding is
that those should return true if an entry is not a pointer to a next
level of table (so pXd_huge(p) implies pXd_bad(p)).
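
[Editorial note: the implication can be checked concretely against the arm64 descriptor encoding, where bits [1:0] of a valid table entry are 0b11 and a block (huge) mapping at PUD/PMD level has bits [1:0] = 0b01. The helpers below are a userspace sketch under that assumption, not the kernel's actual accessors.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the arm64 encoding: bits [1:0] == 0b11 for a table
 * descriptor, 0b01 for a block (huge) mapping. Illustrative only. */
static bool pxd_none(uint64_t e) { return e == 0; }
static bool pxd_huge(uint64_t e) { return (e & 3) == 1; }

/* "bad" = present, but not a pointer to a next-level table */
static bool pxd_bad(uint64_t e)  { return !pxd_none(e) && (e & 3) != 3; }
```

Under this encoding pxd_bad() rejects a huge entry without any dependency on CONFIG_HUGETLB_PAGE, which is the point of the suggestion.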

Otherwise, this looks sensible to me.

Thanks,
Mark.


* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 12:59   ` Mark Rutland
@ 2017-06-08 13:12     ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2017-06-08 13:12 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-mm, Andrew Morton, Michal Hocko, Zhong Jiang, Laura Abbott,
	linux-arm-kernel

On 8 June 2017 at 12:59, Mark Rutland <mark.rutland@arm.com> wrote:
> On Thu, Jun 08, 2017 at 11:35:48AM +0000, Ard Biesheuvel wrote:
>> Existing code that uses vmalloc_to_page() may assume that any
>> address for which is_vmalloc_addr() returns true may be passed
>> into vmalloc_to_page() to retrieve the associated struct page.
>>
>> This is not an unreasonable assumption to make, but on architectures
>> that have CONFIG_HAVE_ARCH_HUGE_VMAP=y, it no longer holds, and we
>> need to ensure that vmalloc_to_page() does not go off into the weeds
>> trying to dereference huge PUDs or PMDs as table entries.
>>
>> Given that vmalloc() and vmap() themselves never create huge
>> mappings or deal with compound pages at all, there is no correct
>> value to return in this case, so return NULL instead, and issue a
>> warning.
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>> This is a followup to '[PATCH v2] mm: vmalloc: make vmalloc_to_page()
>> deal with PMD/PUD mappings', hence the v3. The root issue with /proc/kcore
>> on arm64 is now handled by '[PATCH] mm: vmalloc: simplify vread/vwrite to
>> use existing mappings' [1], and this patch now only complements it by
>> taking care of other vmalloc_to_page() users.
>>
>> [0] http://marc.info/?l=linux-mm&m=149641886821855&w=2
>> [1] http://marc.info/?l=linux-mm&m=149685966530180&w=2
>>
>>  include/linux/hugetlb.h | 13 +++++++++----
>>  mm/vmalloc.c            |  5 +++--
>>  2 files changed, 12 insertions(+), 6 deletions(-)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index b857fc8cc2ec..b7166e5426b6 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -121,8 +121,6 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
>>                               pmd_t *pmd, int flags);
>>  struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
>>                               pud_t *pud, int flags);
>> -int pmd_huge(pmd_t pmd);
>> -int pud_huge(pud_t pud);
>>  unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
>>               unsigned long address, unsigned long end, pgprot_t newprot);
>>
>> @@ -150,8 +148,6 @@ static inline void hugetlb_show_meminfo(void)
>>  #define follow_huge_pmd(mm, addr, pmd, flags)        NULL
>>  #define follow_huge_pud(mm, addr, pud, flags)        NULL
>>  #define prepare_hugepage_range(file, addr, len)      (-EINVAL)
>> -#define pmd_huge(x)  0
>> -#define pud_huge(x)  0
>>  #define is_hugepage_only_range(mm, addr, len)        0
>>  #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
>>  #define hugetlb_fault(mm, vma, addr, flags)  ({ BUG(); 0; })
>> @@ -190,6 +186,15 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
>>  }
>>
>>  #endif /* !CONFIG_HUGETLB_PAGE */
>> +
>> +#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_HAVE_ARCH_HUGE_VMAP)
>> +int pmd_huge(pmd_t pmd);
>> +int pud_huge(pud_t pud);
>> +#else
>> +#define pmd_huge(x)  0
>> +#define pud_huge(x)  0
>> +#endif
>> +
>>  /*
>>   * hugepages at page global directory. If arch support
>>   * hugepages at pgd level, they need to define this.
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 982d29511f92..67e1a304c467 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -12,6 +12,7 @@
>>  #include <linux/mm.h>
>>  #include <linux/module.h>
>>  #include <linux/highmem.h>
>> +#include <linux/hugetlb.h>
>>  #include <linux/sched/signal.h>
>>  #include <linux/slab.h>
>>  #include <linux/spinlock.h>
>> @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>>       if (p4d_none(*p4d))
>>               return NULL;
>>       pud = pud_offset(p4d, addr);
>> -     if (pud_none(*pud))
>> +     if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
>>               return NULL;
>>       pmd = pmd_offset(pud, addr);
>> -     if (pmd_none(*pmd))
>> +     if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
>>               return NULL;
>
> I think it might be better to use p*d_bad() here, since that doesn't
> depend on CONFIG_HUGETLB_PAGE.
>
> While the cross-arch semantics are a little fuzzy, my understanding is
> those should return true if an entry is not a pointer to a next level of
> table (so pXd_huge(p) implies pXd_bad(p)).
>

Fair enough. It is slightly counterintuitive, but I guess this is due
to historical reasons, and is unlikely to change in the future.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 12:59   ` Mark Rutland
@ 2017-06-08 13:28     ` Mark Rutland
  -1 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2017-06-08 13:28 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: mhocko, linux-mm, akpm, zhongjiang, linux-arm-kernel, labbott

On Thu, Jun 08, 2017 at 01:59:46PM +0100, Mark Rutland wrote:
> On Thu, Jun 08, 2017 at 11:35:48AM +0000, Ard Biesheuvel wrote:
> > @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
> >  	if (p4d_none(*p4d))
> >  		return NULL;
> >  	pud = pud_offset(p4d, addr);
> > -	if (pud_none(*pud))
> > +	if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
> >  		return NULL;
> >  	pmd = pmd_offset(pud, addr);
> > -	if (pmd_none(*pmd))
> > +	if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
> >  		return NULL;
> 
> I think it might be better to use p*d_bad() here, since that doesn't
> depend on CONFIG_HUGETLB_PAGE.
> 
> While the cross-arch semantics are a little fuzzy, my understanding is
> those should return true if an entry is not a pointer to a next level of
> table (so pXd_huge(p) implies pXd_bad(p)).

Ugh; it turns out this isn't universally true.

I see that at least arch/hexagon's pmd_bad() always returns 0, and they
support CONFIG_HUGETLB_PAGE.

So I guess there isn't an arch-neutral, always-available way of checking
this. Sorry for having misled you.

For arm64, p*d_bad() would still be preferable, so maybe we should check
both?

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 13:28     ` Mark Rutland
@ 2017-06-08 14:51       ` Ard Biesheuvel
  -1 siblings, 0 replies; 20+ messages in thread
From: Ard Biesheuvel @ 2017-06-08 14:51 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Michal Hocko, linux-mm, Andrew Morton, Zhong Jiang,
	linux-arm-kernel, Laura Abbott

On 8 June 2017 at 13:28, Mark Rutland <mark.rutland@arm.com> wrote:
> On Thu, Jun 08, 2017 at 01:59:46PM +0100, Mark Rutland wrote:
>> On Thu, Jun 08, 2017 at 11:35:48AM +0000, Ard Biesheuvel wrote:
>> > @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>> >     if (p4d_none(*p4d))
>> >             return NULL;
>> >     pud = pud_offset(p4d, addr);
>> > -   if (pud_none(*pud))
>> > +   if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
>> >             return NULL;
>> >     pmd = pmd_offset(pud, addr);
>> > -   if (pmd_none(*pmd))
>> > +   if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
>> >             return NULL;
>>
>> I think it might be better to use p*d_bad() here, since that doesn't
>> depend on CONFIG_HUGETLB_PAGE.
>>
>> While the cross-arch semantics are a little fuzzy, my understanding is
>> those should return true if an entry is not a pointer to a next level of
>> table (so pXd_huge(p) implies pXd_bad(p)).
>
> Ugh; it turns out this isn't universally true.
>
> I see that at least arch/hexagon's pmd_bad() always returns 0, and they
> support CONFIG_HUGETLB_PAGE.
>

Well, the comment in arch/hexagon/include/asm/pgtable.h suggests otherwise:

/*  HUGETLB not working currently  */


> So I guess there isn't an arch-neutral, always-available way of checking
> this. Sorry for having misled you.
>
> For arm64, p*d_bad() would still be preferable, so maybe we should check
> both?
>

I am primarily interested in hardening architectures that define
CONFIG_HAVE_ARCH_HUGE_VMAP, given that they intentionally create huge
mappings in the VMALLOC area which this code may choke on. So whether
pmd_bad() always returns 0 on an arch that does not define
CONFIG_HAVE_ARCH_HUGE_VMAP does not really matter, because it simply
nullifies this change for that particular architecture.

So as long as x86 and arm64 [which are the only ones to define
CONFIG_HAVE_ARCH_HUGE_VMAP atm] work correctly with pXd_bad(), I think
we should use it instead of pXd_huge().

-- 
Ard.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 14:51       ` Ard Biesheuvel
@ 2017-06-08 16:37         ` Mark Rutland
  -1 siblings, 0 replies; 20+ messages in thread
From: Mark Rutland @ 2017-06-08 16:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Michal Hocko, linux-mm, Andrew Morton, Zhong Jiang,
	linux-arm-kernel, Laura Abbott

On Thu, Jun 08, 2017 at 02:51:08PM +0000, Ard Biesheuvel wrote:
> On 8 June 2017 at 13:28, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Thu, Jun 08, 2017 at 01:59:46PM +0100, Mark Rutland wrote:
> >> On Thu, Jun 08, 2017 at 11:35:48AM +0000, Ard Biesheuvel wrote:
> >> > @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
> >> >     if (p4d_none(*p4d))
> >> >             return NULL;
> >> >     pud = pud_offset(p4d, addr);
> >> > -   if (pud_none(*pud))
> >> > +   if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
> >> >             return NULL;
> >> >     pmd = pmd_offset(pud, addr);
> >> > -   if (pmd_none(*pmd))
> >> > +   if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
> >> >             return NULL;
> >>
> >> I think it might be better to use p*d_bad() here, since that doesn't
> >> depend on CONFIG_HUGETLB_PAGE.
> >>
> >> While the cross-arch semantics are a little fuzzy, my understanding is
> >> those should return true if an entry is not a pointer to a next level of
> >> table (so pXd_huge(p) implies pXd_bad(p)).
> >
> > Ugh; it turns out this isn't universally true.
> >
> > I see that at least arch/hexagon's pmd_bad() always returns 0, and they
> > support CONFIG_HUGETLB_PAGE.
> >
> 
> Well, the comment in arch/hexagon/include/asm/pgtable.h suggests otherwise:
> 
> /*  HUGETLB not working currently  */

Ah; I missed that.

> > So I guess there isn't an arch-neutral, always-available way of checking
> > this. Sorry for having misled you.
> >
> > For arm64, p*d_bad() would still be preferable, so maybe we should check
> > both?
> 
> I am primarily interested in hardening architectures that define
> CONFIG_HAVE_ARCH_HUGE_VMAP, given that they intentionally create huge
> mappings in the VMALLOC area which this code may choke on. So whether
> pmd_bad() always returns 0 on an arch that does not define
> CONFIG_HAVE_ARCH_HUGE_VMAP does not really matter, because it simply
> nullifies this change for that particular architecture.
> 
> So as long as x86 and arm64 [which are the only ones to define
> CONFIG_HAVE_ARCH_HUGE_VMAP atm] work correctly with pXd_bad(), I think
> we should use it instead of pXd_huge(),

Sure; that sounds good to me.

Thanks,
Mark.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 11:36   ` Ard Biesheuvel
@ 2017-06-08 17:19     ` Dave Hansen
  -1 siblings, 0 replies; 20+ messages in thread
From: Dave Hansen @ 2017-06-08 17:19 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-mm
  Cc: Andrew Morton, Michal Hocko, Zhong Jiang, Laura Abbott,
	Mark Rutland, linux-arm-kernel

On 06/08/2017 04:36 AM, Ard Biesheuvel wrote:
> @@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>         if (p4d_none(*p4d))
>                 return NULL;
>         pud = pud_offset(p4d, addr);
> -       if (pud_none(*pud))
> +       if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
>                 return NULL;
>         pmd = pmd_offset(pud, addr);
> -       if (pmd_none(*pmd))
> +       if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
>                 return NULL;

Seems sane to me.  It might be nice to actually comment this, though, on
why huge vmalloc_to_page() is unsupported.

Also, not a big deal, but I tend to filter out the contents of
WARN_ON_ONCE() when trying to figure out what code does, so I think I'd
rather this be:

	WARN_ON_ONCE(pmd_huge(*pmd));
	if (pmd_none(*pmd) || pmd_huge(*pmd))
		...

But, again, not a big deal.


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 11:35 ` Ard Biesheuvel
@ 2017-06-09  4:32   ` kbuild test robot
  -1 siblings, 0 replies; 20+ messages in thread
From: kbuild test robot @ 2017-06-09  4:32 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: kbuild-all, linux-mm, akpm, mhocko, zhongjiang, labbott,
	mark.rutland, linux-arm-kernel

[-- Attachment #1: Type: text/plain, Size: 1734 bytes --]

Hi Ard,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.12-rc4]
[cannot apply to next-20170608]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ard-Biesheuvel/mm-huge-vmap-fail-gracefully-on-unexpected-huge-vmap-mappings/20170609-093900
config: x86_64-kexec (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

All errors (new ones prefixed by >>):

   arch/x86/built-in.o: In function `vmalloc_fault':
>> fault.c:(.text+0x422c5): undefined reference to `pud_huge'
>> fault.c:(.text+0x4238e): undefined reference to `pmd_huge'
   mm/built-in.o: In function `follow_page_mask':
>> (.text+0x2fcde): undefined reference to `pud_huge'
   mm/built-in.o: In function `follow_page_mask':
>> (.text+0x2fda8): undefined reference to `pmd_huge'
   mm/built-in.o: In function `__follow_pte_pmd.isra.10':
>> memory.c:(.text+0x31b91): undefined reference to `pmd_huge'
   memory.c:(.text+0x31bbf): undefined reference to `pmd_huge'
   mm/built-in.o: In function `apply_to_page_range':
   (.text+0x34e77): undefined reference to `pud_huge'
   mm/built-in.o: In function `apply_to_page_range':
   (.text+0x35043): undefined reference to `pmd_huge'
   mm/built-in.o: In function `vmalloc_to_page':
   (.text+0x421a0): undefined reference to `pud_huge'
   mm/built-in.o: In function `vmalloc_to_page':
   (.text+0x421ec): undefined reference to `pmd_huge'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 25324 bytes --]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
  2017-06-08 11:35 ` Ard Biesheuvel
@ 2017-06-09  7:45   ` kbuild test robot
  -1 siblings, 0 replies; 20+ messages in thread
From: kbuild test robot @ 2017-06-09  7:45 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: kbuild-all, linux-mm, akpm, mhocko, zhongjiang, labbott,
	mark.rutland, linux-arm-kernel

[-- Attachment #1: Type: text/plain, Size: 1686 bytes --]

Hi Ard,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.12-rc4]
[cannot apply to next-20170608]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Ard-Biesheuvel/mm-huge-vmap-fail-gracefully-on-unexpected-huge-vmap-mappings/20170609-093900
config: i386-randconfig-i1-06042254 (attached as .config)
compiler: gcc-4.8 (Debian 4.8.4-1) 4.8.4
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

   arch/x86/built-in.o: In function `vmalloc_fault':
>> fault.c:(.text.unlikely+0x1765): undefined reference to `pmd_huge'
   mm/built-in.o: In function `follow_page_mask':
   (.text+0x21da6): undefined reference to `pud_huge'
   mm/built-in.o: In function `follow_page_mask':
   (.text+0x21e39): undefined reference to `pmd_huge'
   mm/built-in.o: In function `__follow_pte_pmd.isra.86':
   memory.c:(.text+0x2381c): undefined reference to `pmd_huge'
   memory.c:(.text+0x2384a): undefined reference to `pmd_huge'
   mm/built-in.o: In function `apply_to_page_range':
   (.text+0x2648d): undefined reference to `pud_huge'
   mm/built-in.o: In function `apply_to_page_range':
   (.text+0x26608): undefined reference to `pmd_huge'
   mm/built-in.o: In function `vmalloc_to_page':
   (.text+0x30ea3): undefined reference to `pud_huge'
   mm/built-in.o: In function `vmalloc_to_page':
   (.text+0x30ec8): undefined reference to `pmd_huge'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 27823 bytes --]

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2017-06-09  7:46 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-08 11:35 [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings Ard Biesheuvel
2017-06-08 11:36 ` Ard Biesheuvel
2017-06-08 17:19   ` Dave Hansen
2017-06-08 12:59 ` Mark Rutland
2017-06-08 13:12   ` Ard Biesheuvel
2017-06-08 13:28   ` Mark Rutland
2017-06-08 14:51     ` Ard Biesheuvel
2017-06-08 16:37       ` Mark Rutland
2017-06-09  4:32 ` kbuild test robot
2017-06-09  7:45 ` kbuild test robot

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.