From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, mhocko@suse.com,
	zhongjiang@huawei.com, labbott@fedoraproject.org,
	mark.rutland@arm.com, linux-arm-kernel@lists.infradead.org,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>
Subject: [PATCH v3] mm: huge-vmap: fail gracefully on unexpected huge vmap mappings
Date: Thu,  8 Jun 2017 11:35:48 +0000	[thread overview]
Message-ID: <20170608113548.24905-1-ard.biesheuvel@linaro.org> (raw)

Existing code that uses vmalloc_to_page() may assume that any
address for which is_vmalloc_addr() returns true may be passed
into vmalloc_to_page() to retrieve the associated struct page.

This is not an unreasonable assumption to make, but on architectures
that have CONFIG_HAVE_ARCH_HUGE_VMAP=y, it no longer holds, and we
need to ensure that vmalloc_to_page() does not go off into the weeds
trying to dereference huge PUDs or PMDs as table entries.

Given that vmalloc() and vmap() themselves never create huge
mappings or deal with compound pages at all, there is no correct
value to return in this case, so return NULL instead, and issue a
warning.
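
As an illustration of the caller pattern described above (a
hypothetical helper, not part of this patch), such code must now be
prepared for a NULL return when CONFIG_HAVE_ARCH_HUGE_VMAP=y:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /* Hypothetical caller, for illustration only: resolve any kernel
     * virtual address to its struct page, tolerating the NULL that
     * vmalloc_to_page() now returns for huge-mapped regions. */
    static struct page *example_addr_to_page(const void *addr)
    {
            if (!is_vmalloc_addr(addr))
                    return virt_to_page(addr);

            return vmalloc_to_page(addr);   /* may be NULL */
    }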

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
This is a followup to '[PATCH v2] mm: vmalloc: make vmalloc_to_page()
deal with PMD/PUD mappings' [0], hence the v3. The root issue with /proc/kcore
on arm64 is now handled by '[PATCH] mm: vmalloc: simplify vread/vwrite to
use existing mappings' [1], and this patch now only complements it by
taking care of other vmalloc_to_page() users.

[0] http://marc.info/?l=linux-mm&m=149641886821855&w=2
[1] http://marc.info/?l=linux-mm&m=149685966530180&w=2
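
Note that pmd_huge() and pud_huge() are implemented per architecture;
the hugetlb.h change below only makes the declarations visible when
CONFIG_HAVE_ARCH_HUGE_VMAP=y without CONFIG_HUGETLB_PAGE. On arm64
they amount to roughly the following (a simplified sketch; see
arch/arm64/mm/hugetlbpage.c for the authoritative definitions):

    int pmd_huge(pmd_t pmd)
    {
            /* A present entry without the table bit is a block mapping. */
            return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
    }

    int pud_huge(pud_t pud)
    {
            return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
    }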

 include/linux/hugetlb.h | 13 +++++++++----
 mm/vmalloc.c            |  5 +++--
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b857fc8cc2ec..b7166e5426b6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -121,8 +121,6 @@ struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 				pmd_t *pmd, int flags);
 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
 				pud_t *pud, int flags);
-int pmd_huge(pmd_t pmd);
-int pud_huge(pud_t pud);
 unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
@@ -150,8 +148,6 @@ static inline void hugetlb_show_meminfo(void)
 #define follow_huge_pmd(mm, addr, pmd, flags)	NULL
 #define follow_huge_pud(mm, addr, pud, flags)	NULL
 #define prepare_hugepage_range(file, addr, len)	(-EINVAL)
-#define pmd_huge(x)	0
-#define pud_huge(x)	0
 #define is_hugepage_only_range(mm, addr, len)	0
 #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
 #define hugetlb_fault(mm, vma, addr, flags)	({ BUG(); 0; })
@@ -190,6 +186,15 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
 }
 
 #endif /* !CONFIG_HUGETLB_PAGE */
+
+#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_HAVE_ARCH_HUGE_VMAP)
+int pmd_huge(pmd_t pmd);
+int pud_huge(pud_t pud);
+#else
+#define pmd_huge(x)	0
+#define pud_huge(x)	0
+#endif
+
 /*
  * hugepages at page global directory. If arch support
  * hugepages at pgd level, they need to define this.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 982d29511f92..67e1a304c467 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -12,6 +12,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/highmem.h>
+#include <linux/hugetlb.h>
 #include <linux/sched/signal.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
@@ -287,10 +288,10 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 	if (p4d_none(*p4d))
 		return NULL;
 	pud = pud_offset(p4d, addr);
-	if (pud_none(*pud))
+	if (pud_none(*pud) || WARN_ON_ONCE(pud_huge(*pud)))
 		return NULL;
 	pmd = pmd_offset(pud, addr);
-	if (pmd_none(*pmd))
+	if (pmd_none(*pmd) || WARN_ON_ONCE(pmd_huge(*pmd)))
 		return NULL;
 
 	ptep = pte_offset_map(pmd, addr);
-- 
2.9.3

Thread overview:
2017-06-08 11:35 Ard Biesheuvel [this message]
2017-06-08 11:36 ` Ard Biesheuvel
2017-06-08 17:19   ` Dave Hansen
2017-06-08 12:59 ` Mark Rutland
2017-06-08 13:12   ` Ard Biesheuvel
2017-06-08 13:28   ` Mark Rutland
2017-06-08 14:51     ` Ard Biesheuvel
2017-06-08 16:37       ` Mark Rutland
2017-06-09  4:32 ` kbuild test robot
2017-06-09  7:45 ` kbuild test robot