* [PATCH v10 00/12] huge vmalloc mappings
@ 2021-01-24 8:22 Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings Nicholas Piggin
` (11 more replies)
0 siblings, 12 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
Fixed a couple of bugs that Ding noticed in review and testing.
Thanks,
Nick
Since v9:
- Fixed intermediate build breakage on x86-32 !PAE [thanks Ding]
- Fixed small page fallback case vm_struct double-free [thanks Ding]
Since v8:
- Fixed nommu compile.
- Added Kconfig option help text
- Added VM_NOHUGE, which should help archs inhibit huge mappings where
needed [suggested by Rick]
Since v7:
- Rebase, added some acks, compile fix
- Removed "order=" from vmallocinfo; it's a bit confusing (nr_pages
is in units of small pages for compatibility).
- Added arch_vmap_pmd_supported() test before starting to allocate
the large page, rather than only testing it when doing the map, to
avoid unsupported configs trying to allocate huge pages for no
reason.
Since v6:
- Fixed a false positive warning introduced in patch 2, found by
kbuild test robot.
Since v5:
- Split arch changes out better and make the constant folding work
- Avoid most of the 80 column wrap, fix a reference to lib/ioremap.c
- Fix compile error on some archs
Since v4:
- Fixed an off-by-page-order bug in v4
- Several minor cleanups.
- Added page order to /proc/vmallocinfo
- Added hugepage to alloc_large_system_hash output.
- Made an architecture config option, powerpc only for now.
Since v3:
- Fixed an off-by-one bug in a loop
- Fixed the !CONFIG_HAVE_ARCH_HUGE_VMAP build failure
Nicholas Piggin (12):
mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
mm: apply_to_pte_range warn and fail if a large pte is encountered
mm/vmalloc: rename vmap_*_range vmap_pages_*_range
mm/ioremap: rename ioremap_*_range to vmap_*_range
mm: HUGE_VMAP arch support cleanup
powerpc: inline huge vmap supported functions
arm64: inline huge vmap supported functions
x86: inline huge vmap supported functions
mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
mm/vmalloc: add vmap_range_noflush variant
mm/vmalloc: Hugepage vmalloc mappings
powerpc/64s/radix: Enable huge vmalloc mappings
.../admin-guide/kernel-parameters.txt | 2 +
arch/Kconfig | 10 +
arch/arm64/include/asm/vmalloc.h | 25 +
arch/arm64/mm/mmu.c | 26 -
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/vmalloc.h | 21 +
arch/powerpc/kernel/module.c | 13 +-
arch/powerpc/mm/book3s64/radix_pgtable.c | 21 -
arch/x86/include/asm/vmalloc.h | 23 +
arch/x86/mm/ioremap.c | 19 -
arch/x86/mm/pgtable.c | 13 -
include/linux/io.h | 9 -
include/linux/vmalloc.h | 27 ++
init/main.c | 1 -
mm/ioremap.c | 225 +--------
mm/memory.c | 66 ++-
mm/page_alloc.c | 5 +-
mm/vmalloc.c | 455 +++++++++++++++---
18 files changed, 563 insertions(+), 399 deletions(-)
--
2.23.0
* [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 11:31 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered Nicholas Piggin
` (10 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
Whether or not a vmap is huge depends on architecture details,
alignments, boot options, etc., which the caller cannot be expected
to know. Therefore HUGE_VMAP is a regression for vmalloc_to_page.
This change teaches vmalloc_to_page about larger pages and makes it
return the struct page that corresponds to the offset within the
large page. This makes the API agnostic to mapping implementation
details.
[*] As explained by commit 029c54b095995 ("mm/vmalloc.c: huge-vmap:
fail gracefully on unexpected huge vmap mappings")
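As a rough illustration (a sketch for exposition only; the helper name
is invented, but the arithmetic matches the patch below), a huge PMD
leaf resolves to its tail page like this:

	static struct page *huge_pmd_tail_page(pmd_t pmd, unsigned long addr)
	{
		/*
		 * addr & ~PMD_MASK is the byte offset into the huge page;
		 * shifting right by PAGE_SHIFT turns it into a small-page
		 * index from the head page returned by pmd_page().
		 */
		return pmd_page(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
	}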
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
mm/vmalloc.c | 41 ++++++++++++++++++++++++++---------------
1 file changed, 26 insertions(+), 15 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e6f352bf0498..62372f9e0167 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -34,7 +34,7 @@
#include <linux/bitops.h>
#include <linux/rbtree_augmented.h>
#include <linux/overflow.h>
-
+#include <linux/pgtable.h>
#include <linux/uaccess.h>
#include <asm/tlbflush.h>
#include <asm/shmparam.h>
@@ -343,7 +343,9 @@ int is_vmalloc_or_module_addr(const void *x)
}
/*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the struct page it maps. Huge vmap mappings will
+ * return the tail page that corresponds to the base page address, which
+ * matches small vmap mappings.
*/
struct page *vmalloc_to_page(const void *vmalloc_addr)
{
@@ -363,25 +365,33 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
if (pgd_none(*pgd))
return NULL;
+ if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+ return NULL; /* XXX: no allowance for huge pgd */
+ if (WARN_ON_ONCE(pgd_bad(*pgd)))
+ return NULL;
+
p4d = p4d_offset(pgd, addr);
if (p4d_none(*p4d))
return NULL;
- pud = pud_offset(p4d, addr);
+ if (p4d_leaf(*p4d))
+ return p4d_page(*p4d) + ((addr & ~P4D_MASK) >> PAGE_SHIFT);
+ if (WARN_ON_ONCE(p4d_bad(*p4d)))
+ return NULL;
- /*
- * Don't dereference bad PUD or PMD (below) entries. This will also
- * identify huge mappings, which we may encounter on architectures
- * that define CONFIG_HAVE_ARCH_HUGE_VMAP=y. Such regions will be
- * identified as vmalloc addresses by is_vmalloc_addr(), but are
- * not [unambiguously] associated with a struct page, so there is
- * no correct value to return for them.
- */
- WARN_ON_ONCE(pud_bad(*pud));
- if (pud_none(*pud) || pud_bad(*pud))
+ pud = pud_offset(p4d, addr);
+ if (pud_none(*pud))
+ return NULL;
+ if (pud_leaf(*pud))
+ return pud_page(*pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+ if (WARN_ON_ONCE(pud_bad(*pud)))
return NULL;
+
pmd = pmd_offset(pud, addr);
- WARN_ON_ONCE(pmd_bad(*pmd));
- if (pmd_none(*pmd) || pmd_bad(*pmd))
+ if (pmd_none(*pmd))
+ return NULL;
+ if (pmd_leaf(*pmd))
+ return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+ if (WARN_ON_ONCE(pmd_bad(*pmd)))
return NULL;
ptep = pte_offset_map(pmd, addr);
@@ -389,6 +399,7 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
if (pte_present(pte))
page = pte_page(pte);
pte_unmap(ptep);
+
return page;
}
EXPORT_SYMBOL(vmalloc_to_page);
--
2.23.0
* [PATCH v10 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 11:32 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range Nicholas Piggin
` (9 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
apply_to_pte_range might mistake a large pte for a bad one, or treat it
as a page table, resulting in a crash or corruption. Add a test that
warns and returns an error if large entries are found.
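The per-level pattern, at the PMD level, looks like the following (a
restatement of the hunk below for clarity; the PUD/P4D/PGD levels are
analogous):

	if (pmd_none(*pmd) && !create)
		continue;		/* empty entry, nothing to apply */
	if (WARN_ON_ONCE(pmd_leaf(*pmd)))
		return -EINVAL;		/* large entry: refuse to descend */
	if (!pmd_none(*pmd) && WARN_ON_ONCE(pmd_bad(*pmd))) {
		if (!create)
			continue;
		pmd_clear_bad(pmd);	/* scrub the bad entry before reuse */
	}
	err = apply_to_pte_range(mm, pmd, addr, next, fn, data, create, mask);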
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
mm/memory.c | 66 +++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 49 insertions(+), 17 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index feff48e1465a..672e39a72788 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2440,13 +2440,21 @@ static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
}
do {
next = pmd_addr_end(addr, end);
- if (create || !pmd_none_or_clear_bad(pmd)) {
- err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
- create, mask);
- if (err)
- break;
+ if (pmd_none(*pmd) && !create)
+ continue;
+ if (WARN_ON_ONCE(pmd_leaf(*pmd)))
+ return -EINVAL;
+ if (!pmd_none(*pmd) && WARN_ON_ONCE(pmd_bad(*pmd))) {
+ if (!create)
+ continue;
+ pmd_clear_bad(pmd);
}
+ err = apply_to_pte_range(mm, pmd, addr, next,
+ fn, data, create, mask);
+ if (err)
+ break;
} while (pmd++, addr = next, addr != end);
+
return err;
}
@@ -2468,13 +2476,21 @@ static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
}
do {
next = pud_addr_end(addr, end);
- if (create || !pud_none_or_clear_bad(pud)) {
- err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
- create, mask);
- if (err)
- break;
+ if (pud_none(*pud) && !create)
+ continue;
+ if (WARN_ON_ONCE(pud_leaf(*pud)))
+ return -EINVAL;
+ if (!pud_none(*pud) && WARN_ON_ONCE(pud_bad(*pud))) {
+ if (!create)
+ continue;
+ pud_clear_bad(pud);
}
+ err = apply_to_pmd_range(mm, pud, addr, next,
+ fn, data, create, mask);
+ if (err)
+ break;
} while (pud++, addr = next, addr != end);
+
return err;
}
@@ -2496,13 +2512,21 @@ static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
}
do {
next = p4d_addr_end(addr, end);
- if (create || !p4d_none_or_clear_bad(p4d)) {
- err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
- create, mask);
- if (err)
- break;
+ if (p4d_none(*p4d) && !create)
+ continue;
+ if (WARN_ON_ONCE(p4d_leaf(*p4d)))
+ return -EINVAL;
+ if (!p4d_none(*p4d) && WARN_ON_ONCE(p4d_bad(*p4d))) {
+ if (!create)
+ continue;
+ p4d_clear_bad(p4d);
}
+ err = apply_to_pud_range(mm, p4d, addr, next,
+ fn, data, create, mask);
+ if (err)
+ break;
} while (p4d++, addr = next, addr != end);
+
return err;
}
@@ -2522,9 +2546,17 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
pgd = pgd_offset(mm, addr);
do {
next = pgd_addr_end(addr, end);
- if (!create && pgd_none_or_clear_bad(pgd))
+ if (pgd_none(*pgd) && !create)
continue;
- err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create, &mask);
+ if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+ return -EINVAL;
+ if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
+ if (!create)
+ continue;
+ pgd_clear_bad(pgd);
+ }
+ err = apply_to_p4d_range(mm, pgd, addr, next,
+ fn, data, create, &mask);
if (err)
break;
} while (pgd++, addr = next, addr != end);
--
2.23.0
* [PATCH v10 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 11:34 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range Nicholas Piggin
` (8 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
The vmalloc mapper operates on a struct page * array rather than a
linear physical address; rename it to make this distinction clear.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
mm/vmalloc.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 62372f9e0167..7f2f36116980 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -189,7 +189,7 @@ void unmap_kernel_range_noflush(unsigned long start, unsigned long size)
arch_sync_kernel_mappings(start, end);
}
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long end, pgprot_t prot, struct page **pages, int *nr,
pgtbl_mod_mask *mask)
{
@@ -217,7 +217,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
return 0;
}
-static int vmap_pmd_range(pud_t *pud, unsigned long addr,
+static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
unsigned long end, pgprot_t prot, struct page **pages, int *nr,
pgtbl_mod_mask *mask)
{
@@ -229,13 +229,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
return -ENOMEM;
do {
next = pmd_addr_end(addr, end);
- if (vmap_pte_range(pmd, addr, next, prot, pages, nr, mask))
+ if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr, mask))
return -ENOMEM;
} while (pmd++, addr = next, addr != end);
return 0;
}
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
+static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr,
unsigned long end, pgprot_t prot, struct page **pages, int *nr,
pgtbl_mod_mask *mask)
{
@@ -247,13 +247,13 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
return -ENOMEM;
do {
next = pud_addr_end(addr, end);
- if (vmap_pmd_range(pud, addr, next, prot, pages, nr, mask))
+ if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr, mask))
return -ENOMEM;
} while (pud++, addr = next, addr != end);
return 0;
}
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
+static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
unsigned long end, pgprot_t prot, struct page **pages, int *nr,
pgtbl_mod_mask *mask)
{
@@ -265,7 +265,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
return -ENOMEM;
do {
next = p4d_addr_end(addr, end);
- if (vmap_pud_range(p4d, addr, next, prot, pages, nr, mask))
+ if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr, mask))
return -ENOMEM;
} while (p4d++, addr = next, addr != end);
return 0;
@@ -306,7 +306,7 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
next = pgd_addr_end(addr, end);
if (pgd_bad(*pgd))
mask |= PGTBL_PGD_MODIFIED;
- err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+ err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
if (err)
return err;
} while (pgd++, addr = next, addr != end);
--
2.23.0
* [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (2 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 11:36 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup Nicholas Piggin
` (7 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
This will be used as a generic kernel virtual mapping function, so
rename it in preparation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
mm/ioremap.c | 64 +++++++++++++++++++++++++++-------------------------
1 file changed, 33 insertions(+), 31 deletions(-)
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 5fa1ab41d152..3f4d36f9745a 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -61,9 +61,9 @@ static inline int ioremap_pud_enabled(void) { return 0; }
static inline int ioremap_pmd_enabled(void) { return 0; }
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
-static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ pgtbl_mod_mask *mask)
{
pte_t *pte;
u64 pfn;
@@ -81,9 +81,8 @@ static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
return 0;
}
-static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr,
- pgprot_t prot)
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot)
{
if (!ioremap_pmd_enabled())
return 0;
@@ -103,9 +102,9 @@ static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
return pmd_set_huge(pmd, phys_addr, prot);
}
-static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ pgtbl_mod_mask *mask)
{
pmd_t *pmd;
unsigned long next;
@@ -116,20 +115,19 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
do {
next = pmd_addr_end(addr, end);
- if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+ if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
*mask |= PGTBL_PMD_MODIFIED;
continue;
}
- if (ioremap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+ if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
return -ENOMEM;
} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
return 0;
}
-static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr,
- pgprot_t prot)
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot)
{
if (!ioremap_pud_enabled())
return 0;
@@ -149,9 +147,9 @@ static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
return pud_set_huge(pud, phys_addr, prot);
}
-static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ pgtbl_mod_mask *mask)
{
pud_t *pud;
unsigned long next;
@@ -162,20 +160,19 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
do {
next = pud_addr_end(addr, end);
- if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+ if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
*mask |= PGTBL_PUD_MODIFIED;
continue;
}
- if (ioremap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+ if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
return -ENOMEM;
} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
return 0;
}
-static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr,
- pgprot_t prot)
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot)
{
if (!ioremap_p4d_enabled())
return 0;
@@ -195,9 +192,9 @@ static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
return p4d_set_huge(p4d, phys_addr, prot);
}
-static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
- unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ pgtbl_mod_mask *mask)
{
p4d_t *p4d;
unsigned long next;
@@ -208,19 +205,19 @@ static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
do {
next = p4d_addr_end(addr, end);
- if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+ if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
*mask |= PGTBL_P4D_MODIFIED;
continue;
}
- if (ioremap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+ if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
return -ENOMEM;
} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
return 0;
}
-int ioremap_page_range(unsigned long addr,
- unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+static int vmap_range(unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot)
{
pgd_t *pgd;
unsigned long start;
@@ -235,8 +232,7 @@ int ioremap_page_range(unsigned long addr,
pgd = pgd_offset_k(addr);
do {
next = pgd_addr_end(addr, end);
- err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot,
- &mask);
+ err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
if (err)
break;
} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -249,6 +245,12 @@ int ioremap_page_range(unsigned long addr,
return err;
}
+int ioremap_page_range(unsigned long addr,
+ unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+{
+ return vmap_range(addr, end, phys_addr, prot);
+}
+
#ifdef CONFIG_GENERIC_IOREMAP
void __iomem *ioremap_prot(phys_addr_t addr, size_t size, unsigned long prot)
{
--
2.23.0
* [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (3 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 11:40 ` Christoph Hellwig
2021-01-25 8:40 ` Christophe Leroy
2021-01-24 8:22 ` [PATCH v10 06/12] powerpc: inline huge vmap supported functions Nicholas Piggin
` (6 subsequent siblings)
11 siblings, 2 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong, Catalin Marinas, Will Deacon,
linux-arm-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
x86, H. Peter Anvin
This changes the awkward approach where architectures provide init
functions to determine which levels they can supply large mappings for,
to one where the arch is queried for each call.
This removes code and indirection, and allows constant-folding of dead
code for unsupported levels.
This also adds a prot argument to the arch query. This is currently
unused, but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).
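As a hypothetical illustration of that point (nothing in this series
implements it; the flag test is invented for the example), an arch
could use prot like so:

	static inline bool arch_vmap_pmd_supported(pgprot_t prot)
	{
		/* Hypothetical: refuse huge mappings of uncacheable memory. */
		if (pgprot_val(prot) & _PAGE_NO_CACHE)
			return false;
		return radix_enabled();
	}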
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/arm64/include/asm/vmalloc.h | 8 +++
arch/arm64/mm/mmu.c | 10 +--
arch/powerpc/include/asm/vmalloc.h | 8 +++
arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +--
arch/x86/include/asm/vmalloc.h | 7 ++
arch/x86/mm/ioremap.c | 12 ++--
include/linux/io.h | 9 ---
include/linux/vmalloc.h | 6 ++
init/main.c | 1 -
mm/ioremap.c | 88 +++++++++---------------
10 files changed, 79 insertions(+), 78 deletions(-)
diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 2ca708ab9b20..597b40405319 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
#ifndef _ASM_ARM64_VMALLOC_H
#define _ASM_ARM64_VMALLOC_H
+#include <asm/page.h>
+
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
#endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ae0c3d023824..f6614c378792 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1313,12 +1313,12 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
return dt_virt;
}
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
{
- return 0;
+ return false;
}
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
{
/*
* Only 4k granule supports level 1 block mappings.
@@ -1328,9 +1328,9 @@ int __init arch_ioremap_pud_supported(void)
!IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
}
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
{
- /* See arch_ioremap_pud_supported() */
+ /* See arch_vmap_pud_supported() */
return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
}
diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
index b992dfaaa161..105abb73f075 100644
--- a/arch/powerpc/include/asm/vmalloc.h
+++ b/arch/powerpc/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
#ifndef _ASM_POWERPC_VMALLOC_H
#define _ASM_POWERPC_VMALLOC_H
+#include <asm/page.h>
+
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
#endif /* _ASM_POWERPC_VMALLOC_H */
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 98f0b243c1ab..743807fc210f 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1082,13 +1082,13 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
set_pte_at(mm, addr, ptep, pte);
}
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
{
/* HPT does not cope with large pages in the vmalloc area */
return radix_enabled();
}
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
{
return radix_enabled();
}
@@ -1182,7 +1182,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
return 1;
}
-int __init arch_ioremap_p4d_supported(void)
+bool arch_vmap_p4d_supported(pgprot_t prot)
{
- return 0;
+ return false;
}
diff --git a/arch/x86/include/asm/vmalloc.h b/arch/x86/include/asm/vmalloc.h
index 29837740b520..094ea2b565f3 100644
--- a/arch/x86/include/asm/vmalloc.h
+++ b/arch/x86/include/asm/vmalloc.h
@@ -1,6 +1,13 @@
#ifndef _ASM_X86_VMALLOC_H
#define _ASM_X86_VMALLOC_H
+#include <asm/page.h>
#include <asm/pgtable_areas.h>
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot);
+bool arch_vmap_pud_supported(pgprot_t prot);
+bool arch_vmap_pmd_supported(pgprot_t prot);
+#endif
+
#endif /* _ASM_X86_VMALLOC_H */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 9e5ccc56f8e0..fbaf0c447986 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -481,24 +481,26 @@ void iounmap(volatile void __iomem *addr)
}
EXPORT_SYMBOL(iounmap);
-int __init arch_ioremap_p4d_supported(void)
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
+bool arch_vmap_p4d_supported(pgprot_t prot)
{
- return 0;
+ return false;
}
-int __init arch_ioremap_pud_supported(void)
+bool arch_vmap_pud_supported(pgprot_t prot)
{
#ifdef CONFIG_X86_64
return boot_cpu_has(X86_FEATURE_GBPAGES);
#else
- return 0;
+ return false;
#endif
}
-int __init arch_ioremap_pmd_supported(void)
+bool arch_vmap_pmd_supported(pgprot_t prot)
{
return boot_cpu_has(X86_FEATURE_PSE);
}
+#endif
/*
* Convert a physical pointer to a virtual kernel pointer for /dev/mem
diff --git a/include/linux/io.h b/include/linux/io.h
index 8394c56babc2..f1effd4d7a3c 100644
--- a/include/linux/io.h
+++ b/include/linux/io.h
@@ -31,15 +31,6 @@ static inline int ioremap_page_range(unsigned long addr, unsigned long end,
}
#endif
-#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-void __init ioremap_huge_init(void);
-int arch_ioremap_p4d_supported(void);
-int arch_ioremap_pud_supported(void);
-int arch_ioremap_pmd_supported(void);
-#else
-static inline void ioremap_huge_init(void) { }
-#endif
-
/*
* Managed iomap interface
*/
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 80c0181c411d..00bd62bd701e 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -83,6 +83,12 @@ struct vmap_area {
};
};
+#ifndef CONFIG_HAVE_ARCH_HUGE_VMAP
+static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
+static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }
+#endif
+
/*
* Highlevel APIs for driver use
*/
diff --git a/init/main.c b/init/main.c
index c68d784376ca..bf9389e5b2e4 100644
--- a/init/main.c
+++ b/init/main.c
@@ -834,7 +834,6 @@ static void __init mm_init(void)
pgtable_init();
debug_objects_mem_init();
vmalloc_init();
- ioremap_huge_init();
/* Should be run before the first non-init thread is created */
init_espfix_bsp();
/* Should be run after espfix64 is set up. */
diff --git a/mm/ioremap.c b/mm/ioremap.c
index 3f4d36f9745a..c67f91164401 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -16,49 +16,16 @@
#include "pgalloc-track.h"
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-static int __read_mostly ioremap_p4d_capable;
-static int __read_mostly ioremap_pud_capable;
-static int __read_mostly ioremap_pmd_capable;
-static int __read_mostly ioremap_huge_disabled;
+static unsigned int __ro_after_init iomap_max_page_shift = BITS_PER_LONG - 1;
static int __init set_nohugeiomap(char *str)
{
- ioremap_huge_disabled = 1;
+ iomap_max_page_shift = PAGE_SHIFT;
return 0;
}
early_param("nohugeiomap", set_nohugeiomap);
-
-void __init ioremap_huge_init(void)
-{
- if (!ioremap_huge_disabled) {
- if (arch_ioremap_p4d_supported())
- ioremap_p4d_capable = 1;
- if (arch_ioremap_pud_supported())
- ioremap_pud_capable = 1;
- if (arch_ioremap_pmd_supported())
- ioremap_pmd_capable = 1;
- }
-}
-
-static inline int ioremap_p4d_enabled(void)
-{
- return ioremap_p4d_capable;
-}
-
-static inline int ioremap_pud_enabled(void)
-{
- return ioremap_pud_capable;
-}
-
-static inline int ioremap_pmd_enabled(void)
-{
- return ioremap_pmd_capable;
-}
-
-#else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
-static inline int ioremap_p4d_enabled(void) { return 0; }
-static inline int ioremap_pud_enabled(void) { return 0; }
-static inline int ioremap_pmd_enabled(void) { return 0; }
+#else /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+static const unsigned int iomap_max_page_shift = PAGE_SHIFT;
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
@@ -82,9 +49,13 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
}
static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot)
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
{
- if (!ioremap_pmd_enabled())
+ if (max_page_shift < PMD_SHIFT)
+ return 0;
+
+ if (!arch_vmap_pmd_supported(prot))
return 0;
if ((end - addr) != PMD_SIZE)
@@ -104,7 +75,7 @@ static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+ unsigned int max_page_shift, pgtbl_mod_mask *mask)
{
pmd_t *pmd;
unsigned long next;
@@ -115,7 +86,7 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
do {
next = pmd_addr_end(addr, end);
- if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot)) {
+ if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
*mask |= PGTBL_PMD_MODIFIED;
continue;
}
@@ -127,9 +98,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
}
static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot)
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
{
- if (!ioremap_pud_enabled())
+ if (max_page_shift < PUD_SHIFT)
+ return 0;
+
+ if (!arch_vmap_pud_supported(prot))
return 0;
if ((end - addr) != PUD_SIZE)
@@ -149,7 +124,7 @@ static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+ unsigned int max_page_shift, pgtbl_mod_mask *mask)
{
pud_t *pud;
unsigned long next;
@@ -160,21 +135,25 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
do {
next = pud_addr_end(addr, end);
- if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot)) {
+ if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
*mask |= PGTBL_PUD_MODIFIED;
continue;
}
- if (vmap_pmd_range(pud, addr, next, phys_addr, prot, mask))
+ if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
return -ENOMEM;
} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
return 0;
}
static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot)
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
{
- if (!ioremap_p4d_enabled())
+ if (max_page_shift < P4D_SHIFT)
+ return 0;
+
+ if (!arch_vmap_p4d_supported(prot))
return 0;
if ((end - addr) != P4D_SIZE)
@@ -194,7 +173,7 @@ static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
+ unsigned int max_page_shift, pgtbl_mod_mask *mask)
{
p4d_t *p4d;
unsigned long next;
@@ -205,19 +184,20 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
do {
next = p4d_addr_end(addr, end);
- if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot)) {
+ if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
*mask |= PGTBL_P4D_MODIFIED;
continue;
}
- if (vmap_pud_range(p4d, addr, next, phys_addr, prot, mask))
+ if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
return -ENOMEM;
} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
return 0;
}
static int vmap_range(unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot)
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
{
pgd_t *pgd;
unsigned long start;
@@ -232,7 +212,7 @@ static int vmap_range(unsigned long addr, unsigned long end,
pgd = pgd_offset_k(addr);
do {
next = pgd_addr_end(addr, end);
- err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, &mask);
+ err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
if (err)
break;
} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
@@ -248,7 +228,7 @@ static int vmap_range(unsigned long addr, unsigned long end,
int ioremap_page_range(unsigned long addr,
unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
{
- return vmap_range(addr, end, phys_addr, prot);
+ return vmap_range(addr, end, phys_addr, prot, iomap_max_page_shift);
}
#ifdef CONFIG_GENERIC_IOREMAP
--
2.23.0
* [PATCH v10 06/12] powerpc: inline huge vmap supported functions
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (4 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-25 8:42 ` Christophe Leroy
2021-01-24 8:22 ` [PATCH v10 07/12] arm64: " Nicholas Piggin
` (5 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong, Michael Ellerman
This allows unsupported levels to be constant-folded away, and so
p4d_free_pud_page can be removed because it is no longer referenced.
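To illustrate the constant folding (a sketch of the effect, not code
from the patch): with the helper visible inline and returning a
build-time constant, the compiler can prove the huge-mapping branch
dead and drop it:

	static inline bool arch_vmap_p4d_supported(pgprot_t prot)
	{
		return false;	/* compile-time constant */
	}

	/* In the caller, this test now always returns 0 and is elided: */
	if (!arch_vmap_p4d_supported(prot))
		return 0;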
Cc: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/powerpc/include/asm/vmalloc.h | 19 ++++++++++++++++---
arch/powerpc/mm/book3s64/radix_pgtable.c | 21 ---------------------
2 files changed, 16 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
index 105abb73f075..3f0c153befb0 100644
--- a/arch/powerpc/include/asm/vmalloc.h
+++ b/arch/powerpc/include/asm/vmalloc.h
@@ -1,12 +1,25 @@
#ifndef _ASM_POWERPC_VMALLOC_H
#define _ASM_POWERPC_VMALLOC_H
+#include <asm/mmu.h>
#include <asm/page.h>
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-bool arch_vmap_p4d_supported(pgprot_t prot);
-bool arch_vmap_pud_supported(pgprot_t prot);
-bool arch_vmap_pmd_supported(pgprot_t prot);
+static inline bool arch_vmap_p4d_supported(pgprot_t prot)
+{
+ return false;
+}
+
+static inline bool arch_vmap_pud_supported(pgprot_t prot)
+{
+ /* HPT does not cope with large pages in the vmalloc area */
+ return radix_enabled();
+}
+
+static inline bool arch_vmap_pmd_supported(pgprot_t prot)
+{
+ return radix_enabled();
+}
#endif
#endif /* _ASM_POWERPC_VMALLOC_H */
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 743807fc210f..8da62afccee5 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1082,22 +1082,6 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
set_pte_at(mm, addr, ptep, pte);
}
-bool arch_vmap_pud_supported(pgprot_t prot)
-{
- /* HPT does not cope with large pages in the vmalloc area */
- return radix_enabled();
-}
-
-bool arch_vmap_pmd_supported(pgprot_t prot)
-{
- return radix_enabled();
-}
-
-int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
-{
- return 0;
-}
-
int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
{
pte_t *ptep = (pte_t *)pud;
@@ -1181,8 +1165,3 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
return 1;
}
-
-bool arch_vmap_p4d_supported(pgprot_t prot)
-{
- return false;
-}
--
2.23.0
* [PATCH v10 07/12] arm64: inline huge vmap supported functions
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (5 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 06/12] powerpc: inline huge vmap supported functions Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 08/12] x86: " Nicholas Piggin
` (4 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong, Catalin Marinas, Will Deacon,
linux-arm-kernel
This allows unsupported levels to be constant-folded away, and so
p4d_free_pud_page can be removed because it is no longer referenced.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/arm64/include/asm/vmalloc.h | 23 ++++++++++++++++++++---
arch/arm64/mm/mmu.c | 26 --------------------------
2 files changed, 20 insertions(+), 29 deletions(-)
diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 597b40405319..fc9a12d6cc1a 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -4,9 +4,26 @@
#include <asm/page.h>
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-bool arch_vmap_p4d_supported(pgprot_t prot);
-bool arch_vmap_pud_supported(pgprot_t prot);
-bool arch_vmap_pmd_supported(pgprot_t prot);
+static inline bool arch_vmap_p4d_supported(pgprot_t prot)
+{
+ return false;
+}
+
+static inline bool arch_vmap_pud_supported(pgprot_t prot)
+{
+ /*
+ * Only 4k granule supports level 1 block mappings.
+ * SW table walks can't handle removal of intermediate entries.
+ */
+ return IS_ENABLED(CONFIG_ARM64_4K_PAGES) &&
+ !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
+}
+
+static inline bool arch_vmap_pmd_supported(pgprot_t prot)
+{
+ /* See arch_vmap_pud_supported() */
+ return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
+}
#endif
#endif /* _ASM_ARM64_VMALLOC_H */
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f6614c378792..ab9ba7c36dae 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1313,27 +1313,6 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
return dt_virt;
}
-bool arch_vmap_p4d_supported(pgprot_t prot)
-{
- return false;
-}
-
-bool arch_vmap_pud_supported(pgprot_t prot)
-{
- /*
- * Only 4k granule supports level 1 block mappings.
- * SW table walks can't handle removal of intermediate entries.
- */
- return IS_ENABLED(CONFIG_ARM64_4K_PAGES) &&
- !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
-}
-
-bool arch_vmap_pmd_supported(pgprot_t prot)
-{
- /* See arch_vmap_pud_supported() */
- return !IS_ENABLED(CONFIG_PTDUMP_DEBUGFS);
-}
-
int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
{
pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot));
@@ -1425,11 +1404,6 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
return 1;
}
-int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
-{
- return 0; /* Don't attempt a block mapping */
-}
-
#ifdef CONFIG_MEMORY_HOTPLUG
static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
{
--
2.23.0
* [PATCH v10 08/12] x86: inline huge vmap supported functions
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (6 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 07/12] arm64: " Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c Nicholas Piggin
` (3 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, x86, H. Peter Anvin
This allows unsupported levels to be constant-folded away, and so
p4d_free_pud_page can be removed because it is no longer referenced.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/x86/include/asm/vmalloc.h | 22 +++++++++++++++++++---
arch/x86/mm/ioremap.c | 21 ---------------------
arch/x86/mm/pgtable.c | 13 -------------
3 files changed, 19 insertions(+), 37 deletions(-)
diff --git a/arch/x86/include/asm/vmalloc.h b/arch/x86/include/asm/vmalloc.h
index 094ea2b565f3..e714b00fc0ca 100644
--- a/arch/x86/include/asm/vmalloc.h
+++ b/arch/x86/include/asm/vmalloc.h
@@ -1,13 +1,29 @@
#ifndef _ASM_X86_VMALLOC_H
#define _ASM_X86_VMALLOC_H
+#include <asm/cpufeature.h>
#include <asm/page.h>
#include <asm/pgtable_areas.h>
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-bool arch_vmap_p4d_supported(pgprot_t prot);
-bool arch_vmap_pud_supported(pgprot_t prot);
-bool arch_vmap_pmd_supported(pgprot_t prot);
+static inline bool arch_vmap_p4d_supported(pgprot_t prot)
+{
+ return false;
+}
+
+static inline bool arch_vmap_pud_supported(pgprot_t prot)
+{
+#ifdef CONFIG_X86_64
+ return boot_cpu_has(X86_FEATURE_GBPAGES);
+#else
+ return false;
+#endif
+}
+
+static inline bool arch_vmap_pmd_supported(pgprot_t prot)
+{
+ return boot_cpu_has(X86_FEATURE_PSE);
+}
#endif
#endif /* _ASM_X86_VMALLOC_H */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index fbaf0c447986..12c686c65ea9 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -481,27 +481,6 @@ void iounmap(volatile void __iomem *addr)
}
EXPORT_SYMBOL(iounmap);
-#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
-bool arch_vmap_p4d_supported(pgprot_t prot)
-{
- return false;
-}
-
-bool arch_vmap_pud_supported(pgprot_t prot)
-{
-#ifdef CONFIG_X86_64
- return boot_cpu_has(X86_FEATURE_GBPAGES);
-#else
- return false;
-#endif
-}
-
-bool arch_vmap_pmd_supported(pgprot_t prot)
-{
- return boot_cpu_has(X86_FEATURE_PSE);
-}
-#endif
-
/*
* Convert a physical pointer to a virtual kernel pointer for /dev/mem
* access
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f6a9e2e36642..d27cf69e811d 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -780,14 +780,6 @@ int pmd_clear_huge(pmd_t *pmd)
return 0;
}
-/*
- * Until we support 512GB pages, skip them in the vmap area.
- */
-int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
-{
- return 0;
-}
-
#ifdef CONFIG_X86_64
/**
* pud_free_pmd_page - Clear pud entry and free pmd page.
@@ -861,11 +853,6 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
#else /* !CONFIG_X86_64 */
-int pud_free_pmd_page(pud_t *pud, unsigned long addr)
-{
- return pud_none(*pud);
-}
-
/*
* Disable free page handling on x86-PAE. This assures that ioremap()
* does not update sync'd pmd entries. See vmalloc_sync_one().
--
2.23.0
* [PATCH v10 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (7 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 08/12] x86: " Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 14:49 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 10/12] mm/vmalloc: add vmap_range_noflush variant Nicholas Piggin
` (2 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
This is a generic kernel virtual memory mapper, not specific to ioremap.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
include/linux/vmalloc.h | 3 +
mm/ioremap.c | 197 ----------------------------------------
mm/vmalloc.c | 196 +++++++++++++++++++++++++++++++++++++++
3 files changed, 199 insertions(+), 197 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 00bd62bd701e..40649c4bb5a2 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -178,6 +178,9 @@ extern struct vm_struct *remove_vm_area(const void *addr);
extern struct vm_struct *find_vm_area(const void *addr);
#ifdef CONFIG_MMU
+int vmap_range(unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift);
extern int map_kernel_range_noflush(unsigned long start, unsigned long size,
pgprot_t prot, struct page **pages);
int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot,
diff --git a/mm/ioremap.c b/mm/ioremap.c
index c67f91164401..d1dcc7e744ac 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -28,203 +28,6 @@ early_param("nohugeiomap", set_nohugeiomap);
static const unsigned int iomap_max_page_shift = PAGE_SHIFT;
#endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
-static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- pgtbl_mod_mask *mask)
-{
- pte_t *pte;
- u64 pfn;
-
- pfn = phys_addr >> PAGE_SHIFT;
- pte = pte_alloc_kernel_track(pmd, addr, mask);
- if (!pte)
- return -ENOMEM;
- do {
- BUG_ON(!pte_none(*pte));
- set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
- pfn++;
- } while (pte++, addr += PAGE_SIZE, addr != end);
- *mask |= PGTBL_PTE_MODIFIED;
- return 0;
-}
-
-static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift)
-{
- if (max_page_shift < PMD_SHIFT)
- return 0;
-
- if (!arch_vmap_pmd_supported(prot))
- return 0;
-
- if ((end - addr) != PMD_SIZE)
- return 0;
-
- if (!IS_ALIGNED(addr, PMD_SIZE))
- return 0;
-
- if (!IS_ALIGNED(phys_addr, PMD_SIZE))
- return 0;
-
- if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
- return 0;
-
- return pmd_set_huge(pmd, phys_addr, prot);
-}
-
-static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift, pgtbl_mod_mask *mask)
-{
- pmd_t *pmd;
- unsigned long next;
-
- pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
- if (!pmd)
- return -ENOMEM;
- do {
- next = pmd_addr_end(addr, end);
-
- if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
- *mask |= PGTBL_PMD_MODIFIED;
- continue;
- }
-
- if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
- return -ENOMEM;
- } while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
- return 0;
-}
-
-static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift)
-{
- if (max_page_shift < PUD_SHIFT)
- return 0;
-
- if (!arch_vmap_pud_supported(prot))
- return 0;
-
- if ((end - addr) != PUD_SIZE)
- return 0;
-
- if (!IS_ALIGNED(addr, PUD_SIZE))
- return 0;
-
- if (!IS_ALIGNED(phys_addr, PUD_SIZE))
- return 0;
-
- if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
- return 0;
-
- return pud_set_huge(pud, phys_addr, prot);
-}
-
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift, pgtbl_mod_mask *mask)
-{
- pud_t *pud;
- unsigned long next;
-
- pud = pud_alloc_track(&init_mm, p4d, addr, mask);
- if (!pud)
- return -ENOMEM;
- do {
- next = pud_addr_end(addr, end);
-
- if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
- *mask |= PGTBL_PUD_MODIFIED;
- continue;
- }
-
- if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
- return -ENOMEM;
- } while (pud++, phys_addr += (next - addr), addr = next, addr != end);
- return 0;
-}
-
-static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift)
-{
- if (max_page_shift < P4D_SHIFT)
- return 0;
-
- if (!arch_vmap_p4d_supported(prot))
- return 0;
-
- if ((end - addr) != P4D_SIZE)
- return 0;
-
- if (!IS_ALIGNED(addr, P4D_SIZE))
- return 0;
-
- if (!IS_ALIGNED(phys_addr, P4D_SIZE))
- return 0;
-
- if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
- return 0;
-
- return p4d_set_huge(p4d, phys_addr, prot);
-}
-
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift, pgtbl_mod_mask *mask)
-{
- p4d_t *p4d;
- unsigned long next;
-
- p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
- if (!p4d)
- return -ENOMEM;
- do {
- next = p4d_addr_end(addr, end);
-
- if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
- *mask |= PGTBL_P4D_MODIFIED;
- continue;
- }
-
- if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
- return -ENOMEM;
- } while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
- return 0;
-}
-
-static int vmap_range(unsigned long addr, unsigned long end,
- phys_addr_t phys_addr, pgprot_t prot,
- unsigned int max_page_shift)
-{
- pgd_t *pgd;
- unsigned long start;
- unsigned long next;
- int err;
- pgtbl_mod_mask mask = 0;
-
- might_sleep();
- BUG_ON(addr >= end);
-
- start = addr;
- pgd = pgd_offset_k(addr);
- do {
- next = pgd_addr_end(addr, end);
- err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
- if (err)
- break;
- } while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
-
- flush_cache_vmap(start, end);
-
- if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
- arch_sync_kernel_mappings(start, end);
-
- return err;
-}
-
int ioremap_page_range(unsigned long addr,
unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
{
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7f2f36116980..5d79148b7fa7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -68,6 +68,202 @@ static void free_work(struct work_struct *w)
}
/*** Page table manipulation functions ***/
+static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ pgtbl_mod_mask *mask)
+{
+ pte_t *pte;
+ u64 pfn;
+
+ pfn = phys_addr >> PAGE_SHIFT;
+ pte = pte_alloc_kernel_track(pmd, addr, mask);
+ if (!pte)
+ return -ENOMEM;
+ do {
+ BUG_ON(!pte_none(*pte));
+ set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
+ pfn++;
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+ *mask |= PGTBL_PTE_MODIFIED;
+ return 0;
+}
+
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
+{
+ if (max_page_shift < PMD_SHIFT)
+ return 0;
+
+ if (!arch_vmap_pmd_supported(prot))
+ return 0;
+
+ if ((end - addr) != PMD_SIZE)
+ return 0;
+
+ if (!IS_ALIGNED(addr, PMD_SIZE))
+ return 0;
+
+ if (!IS_ALIGNED(phys_addr, PMD_SIZE))
+ return 0;
+
+ if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
+ return 0;
+
+ return pmd_set_huge(pmd, phys_addr, prot);
+}
+
+static int vmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift, pgtbl_mod_mask *mask)
+{
+ pmd_t *pmd;
+ unsigned long next;
+
+ pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
+ if (!pmd)
+ return -ENOMEM;
+ do {
+ next = pmd_addr_end(addr, end);
+
+ if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot, max_page_shift)) {
+ *mask |= PGTBL_PMD_MODIFIED;
+ continue;
+ }
+
+ if (vmap_pte_range(pmd, addr, next, phys_addr, prot, mask))
+ return -ENOMEM;
+ } while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
+ return 0;
+}
+
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
+{
+ if (max_page_shift < PUD_SHIFT)
+ return 0;
+
+ if (!arch_vmap_pud_supported(prot))
+ return 0;
+
+ if ((end - addr) != PUD_SIZE)
+ return 0;
+
+ if (!IS_ALIGNED(addr, PUD_SIZE))
+ return 0;
+
+ if (!IS_ALIGNED(phys_addr, PUD_SIZE))
+ return 0;
+
+ if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
+ return 0;
+
+ return pud_set_huge(pud, phys_addr, prot);
+}
+
+static int vmap_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift, pgtbl_mod_mask *mask)
+{
+ pud_t *pud;
+ unsigned long next;
+
+ pud = pud_alloc_track(&init_mm, p4d, addr, mask);
+ if (!pud)
+ return -ENOMEM;
+ do {
+ next = pud_addr_end(addr, end);
+
+ if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot, max_page_shift)) {
+ *mask |= PGTBL_PUD_MODIFIED;
+ continue;
+ }
+
+ if (vmap_pmd_range(pud, addr, next, phys_addr, prot, max_page_shift, mask))
+ return -ENOMEM;
+ } while (pud++, phys_addr += (next - addr), addr = next, addr != end);
+ return 0;
+}
+
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
+{
+ if (max_page_shift < P4D_SHIFT)
+ return 0;
+
+ if (!arch_vmap_p4d_supported(prot))
+ return 0;
+
+ if ((end - addr) != P4D_SIZE)
+ return 0;
+
+ if (!IS_ALIGNED(addr, P4D_SIZE))
+ return 0;
+
+ if (!IS_ALIGNED(phys_addr, P4D_SIZE))
+ return 0;
+
+ if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
+ return 0;
+
+ return p4d_set_huge(p4d, phys_addr, prot);
+}
+
+static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift, pgtbl_mod_mask *mask)
+{
+ p4d_t *p4d;
+ unsigned long next;
+
+ p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
+ if (!p4d)
+ return -ENOMEM;
+ do {
+ next = p4d_addr_end(addr, end);
+
+ if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot, max_page_shift)) {
+ *mask |= PGTBL_P4D_MODIFIED;
+ continue;
+ }
+
+ if (vmap_pud_range(p4d, addr, next, phys_addr, prot, max_page_shift, mask))
+ return -ENOMEM;
+ } while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
+ return 0;
+}
+
+int vmap_range(unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
+{
+ pgd_t *pgd;
+ unsigned long start;
+ unsigned long next;
+ int err;
+ pgtbl_mod_mask mask = 0;
+
+ might_sleep();
+ BUG_ON(addr >= end);
+
+ start = addr;
+ pgd = pgd_offset_k(addr);
+ do {
+ next = pgd_addr_end(addr, end);
+ err = vmap_p4d_range(pgd, addr, next, phys_addr, prot, max_page_shift, &mask);
+ if (err)
+ break;
+ } while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
+
+ flush_cache_vmap(start, end);
+
+ if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+ arch_sync_kernel_mappings(start, end);
+
+ return err;
+}
static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pgtbl_mod_mask *mask)
--
2.23.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v10 10/12] mm/vmalloc: add vmap_range_noflush variant
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (8 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 14:51 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 12/12] powerpc/64s/radix: Enable huge " Nicholas Piggin
11 siblings, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
As a side-effect, the order of the flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches
the other callers in this file.
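For context, the point of a _noflush variant is that a caller mapping a
region in several pieces can defer the cache flush and do it once over the
whole range, as the hugepage mapping code added in the next patch does. A
rough sketch of that pattern (a hypothetical caller inside mm/vmalloc.c,
for illustration only — vmap_huge_region does not exist in this series):

	static int vmap_huge_region(unsigned long start, unsigned long end,
			phys_addr_t *phys, unsigned int nr)
	{
		unsigned long addr = start;
		unsigned int i;
		int err = 0;

		/* Map each PMD-sized chunk, without flushing per call... */
		for (i = 0; i < nr; i++) {
			err = vmap_range_noflush(addr, addr + PMD_SIZE,
						 phys[i], PAGE_KERNEL,
						 PMD_SHIFT);
			if (err)
				break;
			addr += PMD_SIZE;
		}
		/* ...then flush the cache once over the whole region. */
		flush_cache_vmap(start, end);
		return err;
	}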
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
mm/vmalloc.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5d79148b7fa7..0377e1d059e5 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -235,7 +235,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
return 0;
}
-int vmap_range(unsigned long addr, unsigned long end,
+static int vmap_range_noflush(unsigned long addr, unsigned long end,
phys_addr_t phys_addr, pgprot_t prot,
unsigned int max_page_shift)
{
@@ -257,14 +257,24 @@ int vmap_range(unsigned long addr, unsigned long end,
break;
} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
- flush_cache_vmap(start, end);
-
if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
arch_sync_kernel_mappings(start, end);
return err;
}
+int vmap_range(unsigned long addr, unsigned long end,
+ phys_addr_t phys_addr, pgprot_t prot,
+ unsigned int max_page_shift)
+{
+ int err;
+
+ err = vmap_range_noflush(addr, end, phys_addr, prot, max_page_shift);
+ flush_cache_vmap(addr, end);
+
+ return err;
+}
+
static void vunmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pgtbl_mod_mask *mask)
{
--
2.23.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (9 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 10/12] mm/vmalloc: add vmap_range_noflush variant Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
2021-01-24 15:07 ` Christoph Hellwig
2021-01-25 9:14 ` Christophe Leroy
2021-01-24 8:22 ` [PATCH v10 12/12] powerpc/64s/radix: Enable huge " Nicholas Piggin
11 siblings, 2 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
Support huge page vmalloc mappings. The config option HAVE_ARCH_HUGE_VMALLOC
enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
support PMD-sized vmap mappings.
vmalloc will attempt to allocate PMD-sized pages when allocating PMD size
or larger, and falls back to small pages if that is unsuccessful.
Architectures must ensure that any arch-specific vmalloc allocations
that require PAGE_SIZE mappings (e.g., module allocations when strict
module rwx is enabled) use the VM_NOHUGE flag to inhibit larger mappings.
When hugepage vmalloc mappings are enabled in the next patch, this
reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node
POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
This can result in more internal fragmentation and memory overhead for a
given allocation, so a boot option, nohugevmalloc, is added to disable it.
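For illustration, an arch-specific allocation that must stay
PAGE_SIZE-mapped opts out with VM_NOHUGE; the powerpc module_alloc()
conversion in the final patch of this series does exactly that, along
these lines:

	/* Modules may need PAGE_SIZE ptes (e.g., for strict module rwx). */
	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
			GFP_KERNEL, PAGE_KERNEL_EXEC,
			VM_NOHUGE | VM_FLUSH_RESET_PERMS,
			NUMA_NO_NODE, __builtin_return_address(0));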
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
arch/Kconfig | 10 +++
include/linux/vmalloc.h | 18 ++++
mm/page_alloc.c | 5 +-
mm/vmalloc.c | 192 ++++++++++++++++++++++++++++++----------
4 files changed, 177 insertions(+), 48 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 24862d15f3a3..f87feb616184 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -724,6 +724,16 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
config HAVE_ARCH_HUGE_VMAP
bool
+config HAVE_ARCH_HUGE_VMALLOC
+ depends on HAVE_ARCH_HUGE_VMAP
+ bool
+ help
+ Archs that select this would be capable of PMD-sized vmaps (i.e.,
+ arch_vmap_pmd_supported() returns true), and they must make no
+ assumptions that vmalloc memory is mapped with PAGE_SIZE ptes. The
+ VM_NOHUGE flag can be used to prohibit arch-specific allocations from
+ using hugepages to help with this (e.g., modules may require it).
+
config ARCH_WANT_HUGE_PMD_SHARE
bool
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 40649c4bb5a2..2ba023daf188 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -25,6 +25,7 @@ struct notifier_block; /* in notifier.h */
#define VM_NO_GUARD 0x00000040 /* don't add guard page */
#define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
#define VM_MAP_PUT_PAGES 0x00000100 /* put pages and free array in vfree */
+#define VM_NOHUGE 0x00000200 /* force PAGE_SIZE pte mapping */
/*
* VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
@@ -59,6 +60,7 @@ struct vm_struct {
unsigned long size;
unsigned long flags;
struct page **pages;
+ unsigned int page_order;
unsigned int nr_pages;
phys_addr_t phys_addr;
const void *caller;
@@ -194,6 +196,18 @@ static inline void set_vm_flush_reset_perms(void *addr)
if (vm)
vm->flags |= VM_FLUSH_RESET_PERMS;
}
+
+static inline bool is_vm_area_hugepages(const void *addr)
+{
+ /*
+ * This may not 100% tell if the area is mapped with > PAGE_SIZE
+ * page table entries, if for some reason the architecture indicates
+ * larger sizes are available but decides not to use them, nothing
+ * prevents that. This only indicates the size of the physical page
+ * allocated in the vmalloc layer.
+ */
+ return (find_vm_area(addr)->page_order > 0);
+}
#else
static inline int
map_kernel_range_noflush(unsigned long start, unsigned long size,
@@ -210,6 +224,10 @@ unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
static inline void set_vm_flush_reset_perms(void *addr)
{
}
+static inline bool is_vm_area_hugepages(const void *addr)
+{
+ return false;
+}
#endif
/* for /dev/kmem */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 027f6481ba59..b7a9661fa232 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -72,6 +72,7 @@
#include <linux/padata.h>
#include <linux/khugepaged.h>
#include <linux/buffer_head.h>
+#include <linux/vmalloc.h>
#include <asm/sections.h>
#include <asm/tlbflush.h>
@@ -8238,6 +8239,7 @@ void *__init alloc_large_system_hash(const char *tablename,
void *table = NULL;
gfp_t gfp_flags;
bool virt;
+ bool huge;
/* allow the kernel cmdline to have a say */
if (!numentries) {
@@ -8305,6 +8307,7 @@ void *__init alloc_large_system_hash(const char *tablename,
} else if (get_order(size) >= MAX_ORDER || hashdist) {
table = __vmalloc(size, gfp_flags);
virt = true;
+ huge = is_vm_area_hugepages(table);
} else {
/*
* If bucketsize is not a power-of-two, we may free
@@ -8321,7 +8324,7 @@ void *__init alloc_large_system_hash(const char *tablename,
pr_info("%s hash table entries: %ld (order: %d, %lu bytes, %s)\n",
tablename, 1UL << log2qty, ilog2(size) - PAGE_SHIFT, size,
- virt ? "vmalloc" : "linear");
+ virt ? (huge ? "vmalloc hugepage" : "vmalloc") : "linear");
if (_hash_shift)
*_hash_shift = log2qty;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0377e1d059e5..eef61e0f5170 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -42,6 +42,19 @@
#include "internal.h"
#include "pgalloc-track.h"
+#ifdef CONFIG_HAVE_ARCH_HUGE_VMALLOC
+static bool __ro_after_init vmap_allow_huge = true;
+
+static int __init set_nohugevmalloc(char *str)
+{
+ vmap_allow_huge = false;
+ return 0;
+}
+early_param("nohugevmalloc", set_nohugevmalloc);
+#else /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */
+static const bool vmap_allow_huge = false;
+#endif /* CONFIG_HAVE_ARCH_HUGE_VMALLOC */
+
bool is_vmalloc_addr(const void *x)
{
unsigned long addr = (unsigned long)x;
@@ -477,31 +490,12 @@ static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
return 0;
}
-/**
- * map_kernel_range_noflush - map kernel VM area with the specified pages
- * @addr: start of the VM area to map
- * @size: size of the VM area to map
- * @prot: page protection flags to use
- * @pages: pages to map
- *
- * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should
- * have been allocated using get_vm_area() and its friends.
- *
- * NOTE:
- * This function does NOT do any cache flushing. The caller is responsible for
- * calling flush_cache_vmap() on to-be-mapped areas before calling this
- * function.
- *
- * RETURNS:
- * 0 on success, -errno on failure.
- */
-int map_kernel_range_noflush(unsigned long addr, unsigned long size,
- pgprot_t prot, struct page **pages)
+static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
+ pgprot_t prot, struct page **pages)
{
unsigned long start = addr;
- unsigned long end = addr + size;
- unsigned long next;
pgd_t *pgd;
+ unsigned long next;
int err = 0;
int nr = 0;
pgtbl_mod_mask mask = 0;
@@ -523,6 +517,65 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
return 0;
}
+static int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+ pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+ unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
+
+ WARN_ON(page_shift < PAGE_SHIFT);
+
+ if (page_shift == PAGE_SHIFT)
+ return vmap_small_pages_range_noflush(addr, end, prot, pages);
+
+ for (i = 0; i < nr; i += 1U << (page_shift - PAGE_SHIFT)) {
+ int err;
+
+ err = vmap_range_noflush(addr, addr + (1UL << page_shift),
+ __pa(page_address(pages[i])), prot,
+ page_shift);
+ if (err)
+ return err;
+
+ addr += 1UL << page_shift;
+ }
+
+ return 0;
+}
+
+static int vmap_pages_range(unsigned long addr, unsigned long end,
+ pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+ int err;
+
+ err = vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
+ flush_cache_vmap(addr, end);
+ return err;
+}
+
+/**
+ * map_kernel_range_noflush - map kernel VM area with the specified pages
+ * @addr: start of the VM area to map
+ * @size: size of the VM area to map
+ * @prot: page protection flags to use
+ * @pages: pages to map
+ *
+ * Map PFN_UP(@size) pages at @addr. The VM area @addr and @size specify should
+ * have been allocated using get_vm_area() and its friends.
+ *
+ * NOTE:
+ * This function does NOT do any cache flushing. The caller is responsible for
+ * calling flush_cache_vmap() on to-be-mapped areas before calling this
+ * function.
+ *
+ * RETURNS:
+ * 0 on success, -errno on failure.
+ */
+int map_kernel_range_noflush(unsigned long addr, unsigned long size,
+ pgprot_t prot, struct page **pages)
+{
+ return vmap_pages_range_noflush(addr, addr + size, prot, pages, PAGE_SHIFT);
+}
+
int map_kernel_range(unsigned long start, unsigned long size, pgprot_t prot,
struct page **pages)
{
@@ -2416,6 +2469,7 @@ static inline void set_area_direct_map(const struct vm_struct *area,
{
int i;
+ /* HUGE_VMALLOC passes small pages to set_direct_map */
for (i = 0; i < area->nr_pages; i++)
if (page_address(area->pages[i]))
set_direct_map(area->pages[i]);
@@ -2449,11 +2503,12 @@ static void vm_remove_mappings(struct vm_struct *area, int deallocate_pages)
* map. Find the start and end range of the direct mappings to make sure
* the vm_unmap_aliases() flush includes the direct map.
*/
- for (i = 0; i < area->nr_pages; i++) {
+ for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
unsigned long addr = (unsigned long)page_address(area->pages[i]);
if (addr) {
+ unsigned long page_size = PAGE_SIZE << area->page_order;
start = min(addr, start);
- end = max(addr + PAGE_SIZE, end);
+ end = max(addr + page_size, end);
flush_dmap = 1;
}
}
@@ -2496,11 +2551,11 @@ static void __vunmap(const void *addr, int deallocate_pages)
if (deallocate_pages) {
int i;
- for (i = 0; i < area->nr_pages; i++) {
+ for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
struct page *page = area->pages[i];
BUG_ON(!page);
- __free_pages(page, 0);
+ __free_pages(page, area->page_order);
}
atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
@@ -2691,15 +2746,18 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
#endif /* CONFIG_VMAP_PFN */
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
- pgprot_t prot, int node)
+ pgprot_t prot, unsigned int page_shift,
+ int node)
{
const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
- unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
- unsigned long array_size;
- unsigned int i;
+ unsigned int page_order = page_shift - PAGE_SHIFT;
+ unsigned long addr = (unsigned long)area->addr;
+ unsigned long size = get_vm_area_size(area);
+ unsigned int nr_small_pages = size >> PAGE_SHIFT;
struct page **pages;
+ unsigned int i;
- array_size = (unsigned long)nr_pages * sizeof(struct page *);
+ array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
gfp_mask |= __GFP_NOWARN;
if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
gfp_mask |= __GFP_HIGHMEM;
@@ -2718,30 +2776,35 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
}
area->pages = pages;
- area->nr_pages = nr_pages;
+ area->nr_pages = nr_small_pages;
+ area->page_order = page_order;
- for (i = 0; i < area->nr_pages; i++) {
+ /*
+ * Careful, we allocate and map page_order pages, but tracking is done
+ * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
+ * the physical/mapped size.
+ */
+ for (i = 0; i < area->nr_pages; i += 1U << page_order) {
struct page *page;
+ int p;
- if (node == NUMA_NO_NODE)
- page = alloc_page(gfp_mask);
- else
- page = alloc_pages_node(node, gfp_mask, 0);
-
+ page = alloc_pages_node(node, gfp_mask, page_order);
if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vfree() */
area->nr_pages = i;
atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
goto fail;
}
- area->pages[i] = page;
+
+ for (p = 0; p < (1U << page_order); p++)
+ area->pages[i + p] = page + p;
+
if (gfpflags_allow_blocking(gfp_mask))
cond_resched();
}
atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
- if (map_kernel_range((unsigned long)area->addr, get_vm_area_size(area),
- prot, pages) < 0)
+ if (vmap_pages_range(addr, addr + size, prot, pages, page_shift) < 0)
goto fail;
return area->addr;
@@ -2749,7 +2812,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
fail:
warn_alloc(gfp_mask, NULL,
"vmalloc: allocation failure, allocated %ld of %ld bytes",
- (area->nr_pages*PAGE_SIZE), area->size);
+ (area->nr_pages*PAGE_SIZE), size);
__vfree(area->addr);
return NULL;
}
@@ -2780,19 +2843,44 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
struct vm_struct *area;
void *addr;
unsigned long real_size = size;
+ unsigned long real_align = align;
+ unsigned int shift = PAGE_SHIFT;
- size = PAGE_ALIGN(size);
if (!size || (size >> PAGE_SHIFT) > totalram_pages())
goto fail;
- area = __get_vm_area_node(real_size, align, VM_ALLOC | VM_UNINITIALIZED |
+ if (vmap_allow_huge && !(vm_flags & VM_NOHUGE) &&
+ arch_vmap_pmd_supported(prot) &&
+ (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))) {
+ unsigned long size_per_node;
+
+ /*
+ * Try huge pages. Only try for PAGE_KERNEL allocations,
+ * others like modules don't yet expect huge pages in
+ * their allocations due to apply_to_page_range not
+ * supporting them.
+ */
+
+ size_per_node = size;
+ if (node == NUMA_NO_NODE)
+ size_per_node /= num_online_nodes();
+ if (size_per_node >= PMD_SIZE) {
+ shift = PMD_SHIFT;
+ align = max(real_align, 1UL << shift);
+ size = ALIGN(real_size, 1UL << shift);
+ }
+ }
+
+again:
+ size = PAGE_ALIGN(size);
+ area = __get_vm_area_node(size, align, VM_ALLOC | VM_UNINITIALIZED |
vm_flags, start, end, node, gfp_mask, caller);
if (!area)
goto fail;
- addr = __vmalloc_area_node(area, gfp_mask, prot, node);
+ addr = __vmalloc_area_node(area, gfp_mask, prot, shift, node);
if (!addr)
- return NULL;
+ goto fail;
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
@@ -2806,8 +2894,18 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
return addr;
fail:
- warn_alloc(gfp_mask, NULL,
+ if (shift > PAGE_SHIFT) {
+ shift = PAGE_SHIFT;
+ align = real_align;
+ size = real_size;
+ goto again;
+ }
+
+ if (!area) {
+ /* Warn for area allocation, page allocations already warn */
+ warn_alloc(gfp_mask, NULL,
"vmalloc: allocation failure: %lu bytes", real_size);
+ }
return NULL;
}
--
2.23.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v10 12/12] powerpc/64s/radix: Enable huge vmalloc mappings
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
` (10 preceding siblings ...)
2021-01-24 8:22 ` [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings Nicholas Piggin
@ 2021-01-24 8:22 ` Nicholas Piggin
11 siblings, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 8:22 UTC (permalink / raw)
To: linux-mm, Andrew Morton
Cc: Nicholas Piggin, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
Documentation/admin-guide/kernel-parameters.txt | 2 ++
arch/powerpc/Kconfig | 1 +
arch/powerpc/kernel/module.c | 13 +++++++++++--
3 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a10b545c2070..d62df53e5200 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3225,6 +3225,8 @@
nohugeiomap [KNL,X86,PPC,ARM64] Disable kernel huge I/O mappings.
+ nohugevmalloc [PPC] Disable kernel huge vmalloc mappings.
+
nosmt [KNL,S390] Disable symmetric multithreading (SMT).
Equivalent to smt=1.
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 107bb4319e0e..781da6829ab7 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -181,6 +181,7 @@ config PPC
select GENERIC_GETTIMEOFDAY
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 && PPC_RADIX_MMU
+ select HAVE_ARCH_HUGE_VMALLOC if HAVE_ARCH_HUGE_VMAP
select HAVE_ARCH_JUMP_LABEL
select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14
select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14
diff --git a/arch/powerpc/kernel/module.c b/arch/powerpc/kernel/module.c
index a211b0253cdb..bc2695eeeb4c 100644
--- a/arch/powerpc/kernel/module.c
+++ b/arch/powerpc/kernel/module.c
@@ -92,8 +92,17 @@ void *module_alloc(unsigned long size)
{
BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR);
- return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, GFP_KERNEL,
- PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
+ /*
+ * Don't do huge page allocations for modules yet until more testing
+ * is done. STRICT_MODULE_RWX may require extra work to support this
+ * too.
+ */
+
+ return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
+ GFP_KERNEL,
+ PAGE_KERNEL_EXEC,
+ VM_NOHUGE | VM_FLUSH_RESET_PERMS,
+ NUMA_NO_NODE,
__builtin_return_address(0));
}
#endif
--
2.23.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings
2021-01-24 8:22 ` [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings Nicholas Piggin
@ 2021-01-24 11:31 ` Christoph Hellwig
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 11:31 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:19PM +1000, Nicholas Piggin wrote:
> vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
> Whether or not a vmap is huge depends on the architecture details,
> alignments, boot options, etc., which the caller can not be expected
> to know. Therefore HUGE_VMAP is a regression for vmalloc_to_page.
>
> This change teaches vmalloc_to_page about larger pages, and returns
> the struct page that corresponds to the offset within the large page.
> This makes the API agnostic to mapping implementation details.
Maybe enable instead of fix would be better in the subject line?
Otherwise this looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered
2021-01-24 8:22 ` [PATCH v10 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered Nicholas Piggin
@ 2021-01-24 11:32 ` Christoph Hellwig
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 11:32 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:20PM +1000, Nicholas Piggin wrote:
> apply_to_pte_range might mistake a large pte for bad, or treat it as a
> page table, resulting in a crash or corruption. Add a test to warn and
> return error if large entries are found.
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range
2021-01-24 8:22 ` [PATCH v10 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range Nicholas Piggin
@ 2021-01-24 11:34 ` Christoph Hellwig
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 11:34 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:21PM +1000, Nicholas Piggin wrote:
> The vmalloc mapper operates on a struct page * array rather than a
> linear physical address, re-name it to make this distinction clear.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range
2021-01-24 8:22 ` [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range Nicholas Piggin
@ 2021-01-24 11:36 ` Christoph Hellwig
2021-01-24 12:04 ` Nicholas Piggin
0 siblings, 1 reply; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 11:36 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:22PM +1000, Nicholas Piggin wrote:
> This will be used as a generic kernel virtual mapping function, so
> re-name it in preparation.
The new name looks ok, but shouldn't it also move to vmalloc.c with
the more generic name and purpose?
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup
2021-01-24 8:22 ` [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup Nicholas Piggin
@ 2021-01-24 11:40 ` Christoph Hellwig
2021-01-24 12:22 ` Nicholas Piggin
2021-01-25 8:19 ` Christophe Leroy
2021-01-25 8:40 ` Christophe Leroy
1 sibling, 2 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 11:40 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong, Catalin Marinas, Will Deacon,
linux-arm-kernel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
x86, H. Peter Anvin
> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
> index 2ca708ab9b20..597b40405319 100644
> --- a/arch/arm64/include/asm/vmalloc.h
> +++ b/arch/arm64/include/asm/vmalloc.h
> @@ -1,4 +1,12 @@
> #ifndef _ASM_ARM64_VMALLOC_H
> #define _ASM_ARM64_VMALLOC_H
>
> +#include <asm/page.h>
> +
> +#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
> +bool arch_vmap_p4d_supported(pgprot_t prot);
> +bool arch_vmap_pud_supported(pgprot_t prot);
> +bool arch_vmap_pmd_supported(pgprot_t prot);
> +#endif
Shouldn't these be inlines or macros? Also it would be useful
if the architectures did not have to override all functions
but just the ones they actually implement?
Also lots of > 80 char lines in the patch.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range
2021-01-24 11:36 ` Christoph Hellwig
@ 2021-01-24 12:04 ` Nicholas Piggin
0 siblings, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 12:04 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Andrew Morton, Christophe Leroy, Ding Tianhong, Jonathan Cameron,
linux-arch, linux-kernel, linux-mm, linuxppc-dev, Zefan Li,
Rick Edgecombe
Excerpts from Christoph Hellwig's message of January 24, 2021 9:36 pm:
> On Sun, Jan 24, 2021 at 06:22:22PM +1000, Nicholas Piggin wrote:
>> This will be used as a generic kernel virtual mapping function, so
>> re-name it in preparation.
>
> The new name looks ok, but shouldn't it also move to vmalloc.c with
> the more generic name and purpose?
>
Yes, I moved it in a later patch to make reviewing easier. Rename in
this one then the move patch is cut and paste.
Thanks,
Nick
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup
2021-01-24 11:40 ` Christoph Hellwig
@ 2021-01-24 12:22 ` Nicholas Piggin
2021-01-25 8:19 ` Christophe Leroy
1 sibling, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 12:22 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Andrew Morton, Borislav Petkov, Catalin Marinas,
Christophe Leroy, Ding Tianhong, H. Peter Anvin,
Jonathan Cameron, linux-arch, linux-arm-kernel, linux-kernel,
linux-mm, linuxppc-dev, Zefan Li, Ingo Molnar, Rick Edgecombe,
Thomas Gleixner, Will Deacon, x86
Excerpts from Christoph Hellwig's message of January 24, 2021 9:40 pm:
>> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
>> index 2ca708ab9b20..597b40405319 100644
>> --- a/arch/arm64/include/asm/vmalloc.h
>> +++ b/arch/arm64/include/asm/vmalloc.h
>> @@ -1,4 +1,12 @@
>> #ifndef _ASM_ARM64_VMALLOC_H
>> #define _ASM_ARM64_VMALLOC_H
>>
>> +#include <asm/page.h>
>> +
>> +#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>> +bool arch_vmap_p4d_supported(pgprot_t prot);
>> +bool arch_vmap_pud_supported(pgprot_t prot);
>> +bool arch_vmap_pmd_supported(pgprot_t prot);
>> +#endif
>
> Shouldn't these be inlines or macros? Also it would be useful
> if the architectures did not have to override all functions
> but just the ones they actually implement?
It gets better in the next patches. I did it this way again to avoid
moving a lot of code at the same time as changing name / prototype
slightly.
I didn't see individual generic fallbacks being all that useful really
at this scale. I don't mind keeping the explicit false.
> Also lots of > 80 char lines in the patch.
Yeah there's a few, I can reduce those.
Thanks,
Nick
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c
2021-01-24 8:22 ` [PATCH v10 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c Nicholas Piggin
@ 2021-01-24 14:49 ` Christoph Hellwig
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 14:49 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:27PM +1000, Nicholas Piggin wrote:
> This is a generic kernel virtual memory mapper, not specific to ioremap.
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
Although it would be nice if you could fix up the > 80 lines while
you're at it.
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 10/12] mm/vmalloc: add vmap_range_noflush variant
2021-01-24 8:22 ` [PATCH v10 10/12] mm/vmalloc: add vmap_range_noflush variant Nicholas Piggin
@ 2021-01-24 14:51 ` Christoph Hellwig
0 siblings, 0 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 14:51 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:28PM +1000, Nicholas Piggin wrote:
> As a side-effect, the order of flush_cache_vmap() and
> arch_sync_kernel_mappings() calls are switched, but that now matches
> the other callers in this file.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-24 8:22 ` [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings Nicholas Piggin
@ 2021-01-24 15:07 ` Christoph Hellwig
2021-01-24 18:06 ` Randy Dunlap
2021-01-24 23:17 ` Nicholas Piggin
2021-01-25 9:14 ` Christophe Leroy
1 sibling, 2 replies; 34+ messages in thread
From: Christoph Hellwig @ 2021-01-24 15:07 UTC (permalink / raw)
To: Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christoph Hellwig, Christophe Leroy,
Rick Edgecombe, Ding Tianhong
On Sun, Jan 24, 2021 at 06:22:29PM +1000, Nicholas Piggin wrote:
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 24862d15f3a3..f87feb616184 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -724,6 +724,16 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> config HAVE_ARCH_HUGE_VMAP
> bool
>
> +config HAVE_ARCH_HUGE_VMALLOC
> + depends on HAVE_ARCH_HUGE_VMAP
> + bool
> + help
> + Archs that select this would be capable of PMD-sized vmaps (i.e.,
> + arch_vmap_pmd_supported() returns true), and they must make no
> + assumptions that vmalloc memory is mapped with PAGE_SIZE ptes. The
> + VM_NOHUGE flag can be used to prohibit arch-specific allocations from
> + using hugepages to help with this (e.g., modules may require it).
help texts don't make sense for options that aren't user visible.
More importantly, is there any good reason to keep the option and not
just go the extra step and enable huge page vmalloc for arm64 and x86
as well?
> +static inline bool is_vm_area_hugepages(const void *addr)
> +{
> + /*
> + * This may not 100% tell if the area is mapped with > PAGE_SIZE
> + * page table entries, if for some reason the architecture indicates
> + * larger sizes are available but decides not to use them, nothing
> + * prevents that. This only indicates the size of the physical page
> + * allocated in the vmalloc layer.
> + */
> + return (find_vm_area(addr)->page_order > 0);
No need for the braces here.
> }
>
> +static int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
> + pgprot_t prot, struct page **pages, unsigned int page_shift)
> +{
> + unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
> +
> + WARN_ON(page_shift < PAGE_SHIFT);
> +
> + if (page_shift == PAGE_SHIFT)
> + return vmap_small_pages_range_noflush(addr, end, prot, pages);
This begs for a IS_ENABLED check to disable the hugepage code for
architectures that don't need it.
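Something like this (untested) should let the compiler discard the
hugepage path entirely on architectures that don't select the option:
	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMALLOC) ||
			page_shift == PAGE_SHIFT)
		return vmap_small_pages_range_noflush(addr, end, prot, pages);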
> +int map_kernel_range_noflush(unsigned long addr, unsigned long size,
> + pgprot_t prot, struct page **pages)
> +{
> + return vmap_pages_range_noflush(addr, addr + size, prot, pages, PAGE_SHIFT);
> +}
Please just kill off map_kernel_range_noflush and map_kernel_range
entirely in favor of the vmap versions.
> + for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
Maybe using a helper that takes the vm_area_struct and either returns
area->page_order or always 0 based on IS_ENABLED?
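For example (untested sketch — the helper name vm_area_page_order is
invented here):
	static inline unsigned int vm_area_page_order(struct vm_struct *vm)
	{
	#ifdef CONFIG_HAVE_ARCH_HUGE_VMALLOC
		/* Physical allocation order used for this area's pages. */
		return vm->page_order;
	#else
		return 0;
	#endif
	}
The loops would then become i += 1U << vm_area_page_order(area), and the
read constant-folds to 0 when the option is off.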
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-24 15:07 ` Christoph Hellwig
@ 2021-01-24 18:06 ` Randy Dunlap
2021-01-24 23:17 ` Nicholas Piggin
1 sibling, 0 replies; 34+ messages in thread
From: Randy Dunlap @ 2021-01-24 18:06 UTC (permalink / raw)
To: Christoph Hellwig, Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Christophe Leroy, Rick Edgecombe,
Ding Tianhong
On 1/24/21 7:07 AM, Christoph Hellwig wrote:
>> +config HAVE_ARCH_HUGE_VMALLOC
>> + depends on HAVE_ARCH_HUGE_VMAP
>> + bool
>> + help
>> + Archs that select this would be capable of PMD-sized vmaps (i.e.,
>> + arch_vmap_pmd_supported() returns true), and they must make no
>> + assumptions that vmalloc memory is mapped with PAGE_SIZE ptes. The
>> + VM_NOHUGE flag can be used to prohibit arch-specific allocations from
>> + using hugepages to help with this (e.g., modules may require it).
> help texts don't make sense for options that aren't user visible.
It's good that the Kconfig symbol is documented and it's better here
than having to dig thru git commit logs IMO.
It could be done as "# Archs that select" style comments instead
of Kconfig help text.
--
~Randy
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-24 15:07 ` Christoph Hellwig
2021-01-24 18:06 ` Randy Dunlap
@ 2021-01-24 23:17 ` Nicholas Piggin
1 sibling, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-24 23:17 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Andrew Morton, Christophe Leroy, Ding Tianhong, Jonathan Cameron,
linux-arch, linux-kernel, linux-mm, linuxppc-dev, Zefan Li,
Rick Edgecombe, Randy Dunlap
Excerpts from Christoph Hellwig's message of January 25, 2021 1:07 am:
> On Sun, Jan 24, 2021 at 06:22:29PM +1000, Nicholas Piggin wrote:
>> diff --git a/arch/Kconfig b/arch/Kconfig
>> index 24862d15f3a3..f87feb616184 100644
>> --- a/arch/Kconfig
>> +++ b/arch/Kconfig
>> @@ -724,6 +724,16 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>> config HAVE_ARCH_HUGE_VMAP
>> bool
>>
>> +config HAVE_ARCH_HUGE_VMALLOC
>> + depends on HAVE_ARCH_HUGE_VMAP
>> + bool
>> + help
>> + Archs that select this would be capable of PMD-sized vmaps (i.e.,
>> + arch_vmap_pmd_supported() returns true), and they must make no
>> + assumptions that vmalloc memory is mapped with PAGE_SIZE ptes. The
>> + VM_NOHUGE flag can be used to prohibit arch-specific allocations from
>> + using hugepages to help with this (e.g., modules may require it).
>
> help texts don't make sense for options that aren't user visible.
Yeah, it was supposed to just be a comment; even if it were user visible,
this kind of thing would not make sense as help text, so I'll
just turn it into a real comment as per Randy's suggestion.
> More importantly, is there any good reason to keep the option and not
> just go the extra step and enable huge page vmalloc for arm64 and x86
> as well?
Yes they need to ensure they exclude vmallocs that can't be huge one
way or another (VM_ flag or prot arg).
After they're converted we can fold it into HUGE_VMAP.
>> +static inline bool is_vm_area_hugepages(const void *addr)
>> +{
>> + /*
>> + * This may not 100% tell if the area is mapped with > PAGE_SIZE
>> + * page table entries, if for some reason the architecture indicates
>> + * larger sizes are available but decides not to use them, nothing
>> + * prevents that. This only indicates the size of the physical page
>> + * allocated in the vmalloc layer.
>> + */
>> + return (find_vm_area(addr)->page_order > 0);
>
> No need for the braces here.
>
>> }
>>
>> +static int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
>> + pgprot_t prot, struct page **pages, unsigned int page_shift)
>> +{
>> + unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
>> +
>> + WARN_ON(page_shift < PAGE_SHIFT);
>> +
>> + if (page_shift == PAGE_SHIFT)
>> + return vmap_small_pages_range_noflush(addr, end, prot, pages);
>
> This begs for a IS_ENABLED check to disable the hugepage code for
> architectures that don't need it.
Yeah good point.
>> +int map_kernel_range_noflush(unsigned long addr, unsigned long size,
>> + pgprot_t prot, struct page **pages)
>> +{
>> + return vmap_pages_range_noflush(addr, addr + size, prot, pages, PAGE_SHIFT);
>> +}
>
> Please just kill off map_kernel_range_noflush and map_kernel_range
> entirely in favor of the vmap versions.
I can do a cleanup patch on top of it.
>> + for (i = 0; i < area->nr_pages; i += 1U << area->page_order) {
>
> Maybe using a helper that takes the vm_area_struct and either returns
> area->page_order or always 0 based on IS_ENABLED?
I'll see how it looks.
Thanks,
Nick
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup
2021-01-24 11:40 ` Christoph Hellwig
2021-01-24 12:22 ` Nicholas Piggin
@ 2021-01-25 8:19 ` Christophe Leroy
1 sibling, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-01-25 8:19 UTC (permalink / raw)
To: Christoph Hellwig, Nicholas Piggin
Cc: linux-mm, Andrew Morton, linux-kernel, linux-arch, linuxppc-dev,
Zefan Li, Jonathan Cameron, Rick Edgecombe, Ding Tianhong,
Catalin Marinas, Will Deacon, linux-arm-kernel, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, x86, H. Peter Anvin
On 24/01/2021 at 12:40, Christoph Hellwig wrote:
>> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
>> index 2ca708ab9b20..597b40405319 100644
>> --- a/arch/arm64/include/asm/vmalloc.h
>> +++ b/arch/arm64/include/asm/vmalloc.h
>> @@ -1,4 +1,12 @@
>> #ifndef _ASM_ARM64_VMALLOC_H
>> #define _ASM_ARM64_VMALLOC_H
>>
>> +#include <asm/page.h>
>> +
>> +#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>> +bool arch_vmap_p4d_supported(pgprot_t prot);
>> +bool arch_vmap_pud_supported(pgprot_t prot);
>> +bool arch_vmap_pmd_supported(pgprot_t prot);
>> +#endif
>
> Shouldn't these be inlines or macros? Also it would be useful
> if the architectures did not have to override all functions
> but just the ones they actually implement?
>
> Also lots of > 80 char lines in the patch.
>
Since https://github.com/linuxppc/linux/commit/bdc48fa11e46f867ea4d75fa59ee87a7f48be144
this 80 char limit is not strongly enforced anymore.
Although 80 is still the preferred limit, code is often more readable with a slightly longer single
line than with lines split.
Christophe
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup
2021-01-24 8:22 ` [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup Nicholas Piggin
2021-01-24 11:40 ` Christoph Hellwig
@ 2021-01-25 8:40 ` Christophe Leroy
1 sibling, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-01-25 8:40 UTC (permalink / raw)
To: Nicholas Piggin, linux-mm, Andrew Morton
Cc: linux-kernel, linux-arch, linuxppc-dev, Zefan Li,
Jonathan Cameron, Christoph Hellwig, Rick Edgecombe,
Ding Tianhong, Catalin Marinas, Will Deacon, linux-arm-kernel,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86,
H. Peter Anvin
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
> This changes the awkward approach where architectures provide init
> functions to determine which levels they can provide large mappings for,
> to one where the arch is queried for each call.
>
> This removes code and indirection, and allows constant-folding of dead
> code for unsupported levels.
It looks like this is only the case when CONFIG_HAVE_ARCH_HUGE_VMAP is not defined.
When it is defined, for example on powerpc you defined arch_vmap_p4d_supported() as a regular
function in arch/powerpc/mm/book3s64/radix_pgtable.c, so although it always returns false, the
dead code won't be constant-folded.
>
> This also adds a prot argument to the arch query. This is unused
> currently but could help with some architectures (e.g., some powerpc
> processors can't map uncacheable memory with large pages).
>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: x86@kernel.org
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/arm64/include/asm/vmalloc.h | 8 +++
> arch/arm64/mm/mmu.c | 10 +--
> arch/powerpc/include/asm/vmalloc.h | 8 +++
> arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +--
> arch/x86/include/asm/vmalloc.h | 7 ++
> arch/x86/mm/ioremap.c | 12 ++--
> include/linux/io.h | 9 ---
> include/linux/vmalloc.h | 6 ++
> init/main.c | 1 -
> mm/ioremap.c | 88 +++++++++---------------
> 10 files changed, 79 insertions(+), 78 deletions(-)
>
Christophe
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 06/12] powerpc: inline huge vmap supported functions
2021-01-24 8:22 ` [PATCH v10 06/12] powerpc: inline huge vmap supported functions Nicholas Piggin
@ 2021-01-25 8:42 ` Christophe Leroy
2021-01-25 11:37 ` Nicholas Piggin
0 siblings, 1 reply; 34+ messages in thread
From: Christophe Leroy @ 2021-01-25 8:42 UTC (permalink / raw)
To: Nicholas Piggin, linux-mm, Andrew Morton
Cc: linux-kernel, linux-arch, linuxppc-dev, Zefan Li,
Jonathan Cameron, Christoph Hellwig, Rick Edgecombe,
Ding Tianhong, Michael Ellerman
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
> This allows unsupported levels to be constant folded away, and so
> p4d_free_pud_page can be removed because it's no longer linked to.
Ah, ok, you did it here. Why not squash this patch into patch 5 directly?
>
> Cc: linuxppc-dev@lists.ozlabs.org
> Acked-by: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/powerpc/include/asm/vmalloc.h | 19 ++++++++++++++++---
> arch/powerpc/mm/book3s64/radix_pgtable.c | 21 ---------------------
> 2 files changed, 16 insertions(+), 24 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/vmalloc.h b/arch/powerpc/include/asm/vmalloc.h
> index 105abb73f075..3f0c153befb0 100644
> --- a/arch/powerpc/include/asm/vmalloc.h
> +++ b/arch/powerpc/include/asm/vmalloc.h
> @@ -1,12 +1,25 @@
> #ifndef _ASM_POWERPC_VMALLOC_H
> #define _ASM_POWERPC_VMALLOC_H
>
> +#include <asm/mmu.h>
> #include <asm/page.h>
>
> #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
> -bool arch_vmap_p4d_supported(pgprot_t prot);
> -bool arch_vmap_pud_supported(pgprot_t prot);
> -bool arch_vmap_pmd_supported(pgprot_t prot);
> +static inline bool arch_vmap_p4d_supported(pgprot_t prot)
> +{
> + return false;
> +}
> +
> +static inline bool arch_vmap_pud_supported(pgprot_t prot)
> +{
> + /* HPT does not cope with large pages in the vmalloc area */
> + return radix_enabled();
> +}
> +
> +static inline bool arch_vmap_pmd_supported(pgprot_t prot)
> +{
> + return radix_enabled();
> +}
> #endif
>
> #endif /* _ASM_POWERPC_VMALLOC_H */
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 743807fc210f..8da62afccee5 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -1082,22 +1082,6 @@ void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
> set_pte_at(mm, addr, ptep, pte);
> }
>
> -bool arch_vmap_pud_supported(pgprot_t prot)
> -{
> - /* HPT does not cope with large pages in the vmalloc area */
> - return radix_enabled();
> -}
> -
> -bool arch_vmap_pmd_supported(pgprot_t prot)
> -{
> - return radix_enabled();
> -}
> -
> -int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
> -{
> - return 0;
> -}
> -
> int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
> {
> pte_t *ptep = (pte_t *)pud;
> @@ -1181,8 +1165,3 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
>
> return 1;
> }
> -
> -bool arch_vmap_p4d_supported(pgprot_t prot)
> -{
> - return false;
> -}
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-24 8:22 ` [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings Nicholas Piggin
2021-01-24 15:07 ` Christoph Hellwig
@ 2021-01-25 9:14 ` Christophe Leroy
2021-01-25 11:37 ` Nicholas Piggin
2021-01-25 12:24 ` David Laight
1 sibling, 2 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-01-25 9:14 UTC (permalink / raw)
To: Nicholas Piggin, linux-mm, Andrew Morton
Cc: linux-kernel, linux-arch, linuxppc-dev, Zefan Li,
Jonathan Cameron, Christoph Hellwig, Rick Edgecombe,
Ding Tianhong
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
> supports PMD sized vmap mappings.
>
> vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
> or larger, and fall back to small pages if that was unsuccessful.
>
> Architectures must ensure that any arch specific vmalloc allocations
> that require PAGE_SIZE mappings (e.g., module allocations vs strict
> module rwx) use the VM_NOHUGE flag to inhibit larger mappings.
>
> When hugepage vmalloc mappings are enabled in the next patch, this
> reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node
> POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
>
> This can result in more internal fragmentation and memory overhead for a
> given allocation, an option nohugevmalloc is added to disable at boot.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> arch/Kconfig | 10 +++
> include/linux/vmalloc.h | 18 ++++
> mm/page_alloc.c | 5 +-
> mm/vmalloc.c | 192 ++++++++++++++++++++++++++++++----------
> 4 files changed, 177 insertions(+), 48 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 0377e1d059e5..eef61e0f5170 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2691,15 +2746,18 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
> #endif /* CONFIG_VMAP_PFN */
>
> static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> - pgprot_t prot, int node)
> + pgprot_t prot, unsigned int page_shift,
> + int node)
> {
> const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> - unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
> - unsigned long array_size;
> - unsigned int i;
> + unsigned int page_order = page_shift - PAGE_SHIFT;
> + unsigned long addr = (unsigned long)area->addr;
> + unsigned long size = get_vm_area_size(area);
> + unsigned int nr_small_pages = size >> PAGE_SHIFT;
> struct page **pages;
> + unsigned int i;
>
> - array_size = (unsigned long)nr_pages * sizeof(struct page *);
> + array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
array_size() is a function in include/linux/overflow.h
For some reason, it breaks the build with your series.
> gfp_mask |= __GFP_NOWARN;
> if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
> gfp_mask |= __GFP_HIGHMEM;
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 06/12] powerpc: inline huge vmap supported functions
2021-01-25 8:42 ` Christophe Leroy
@ 2021-01-25 11:37 ` Nicholas Piggin
0 siblings, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-25 11:37 UTC (permalink / raw)
To: Andrew Morton, Christophe Leroy, linux-mm
Cc: Ding Tianhong, Christoph Hellwig, Jonathan Cameron, linux-arch,
linux-kernel, linuxppc-dev, Zefan Li, Michael Ellerman,
Rick Edgecombe
Excerpts from Christophe Leroy's message of January 25, 2021 6:42 pm:
>
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> This allows unsupported levels to be constant folded away, and so
>> p4d_free_pud_page can be removed because it's no longer linked to.
>
> Ah, ok, you did it here. Why not squashing this patch into patch 5 directly ?
To reduce arch code movement in the first patch and split up these arch
patches to get separate acks for them.
Maybe overkill for these changes but doesn't hurt I think.
Thanks,
Nick
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-25 9:14 ` Christophe Leroy
@ 2021-01-25 11:37 ` Nicholas Piggin
2021-01-25 12:13 ` Christophe Leroy
2021-01-25 12:24 ` David Laight
1 sibling, 1 reply; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-25 11:37 UTC (permalink / raw)
To: Andrew Morton, Christophe Leroy, linux-mm
Cc: Ding Tianhong, Christoph Hellwig, Jonathan Cameron, linux-arch,
linux-kernel, linuxppc-dev, Zefan Li, Rick Edgecombe
Excerpts from Christophe Leroy's message of January 25, 2021 7:14 pm:
>
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
>> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
>> supports PMD sized vmap mappings.
>>
>> vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
>> or larger, and fall back to small pages if that was unsuccessful.
>>
>> Architectures must ensure that any arch specific vmalloc allocations
>> that require PAGE_SIZE mappings (e.g., module allocations vs strict
>> module rwx) use the VM_NOHUGE flag to inhibit larger mappings.
>>
>> When hugepage vmalloc mappings are enabled in the next patch, this
>> reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node
>> POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
>>
>> This can result in more internal fragmentation and memory overhead for a
>> given allocation, an option nohugevmalloc is added to disable at boot.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> arch/Kconfig | 10 +++
>> include/linux/vmalloc.h | 18 ++++
>> mm/page_alloc.c | 5 +-
>> mm/vmalloc.c | 192 ++++++++++++++++++++++++++++++----------
>> 4 files changed, 177 insertions(+), 48 deletions(-)
>>
>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index 0377e1d059e5..eef61e0f5170 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>
>> @@ -2691,15 +2746,18 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
>> #endif /* CONFIG_VMAP_PFN */
>>
>> static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>> - pgprot_t prot, int node)
>> + pgprot_t prot, unsigned int page_shift,
>> + int node)
>> {
>> const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
>> - unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
>> - unsigned long array_size;
>> - unsigned int i;
>> + unsigned int page_order = page_shift - PAGE_SHIFT;
>> + unsigned long addr = (unsigned long)area->addr;
>> + unsigned long size = get_vm_area_size(area);
>> + unsigned int nr_small_pages = size >> PAGE_SHIFT;
>> struct page **pages;
>> + unsigned int i;
>>
>> - array_size = (unsigned long)nr_pages * sizeof(struct page *);
>> + array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
>
> array_size() is a function in include/linux/overflow.h
>
> For some reason, it breaks the build with your series.
What config? I haven't seen it.
Thanks,
Nick
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-25 11:37 ` Nicholas Piggin
@ 2021-01-25 12:13 ` Christophe Leroy
0 siblings, 0 replies; 34+ messages in thread
From: Christophe Leroy @ 2021-01-25 12:13 UTC (permalink / raw)
To: Nicholas Piggin, Andrew Morton, linux-mm
Cc: Ding Tianhong, Christoph Hellwig, Jonathan Cameron, linux-arch,
linux-kernel, linuxppc-dev, Zefan Li, Rick Edgecombe
On 25/01/2021 at 12:37, Nicholas Piggin wrote:
> Excerpts from Christophe Leroy's message of January 25, 2021 7:14 pm:
>>
>>
>> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>>> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
>>> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
>>> supports PMD sized vmap mappings.
>>>
>>> vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
>>> or larger, and fall back to small pages if that was unsuccessful.
>>>
>>> Architectures must ensure that any arch specific vmalloc allocations
>>> that require PAGE_SIZE mappings (e.g., module allocations vs strict
>>> module rwx) use the VM_NOHUGE flag to inhibit larger mappings.
>>>
>>> When hugepage vmalloc mappings are enabled in the next patch, this
>>> reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node
>>> POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
>>>
>>> This can result in more internal fragmentation and memory overhead for a
>>> given allocation, an option nohugevmalloc is added to disable at boot.
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>> arch/Kconfig | 10 +++
>>> include/linux/vmalloc.h | 18 ++++
>>> mm/page_alloc.c | 5 +-
>>> mm/vmalloc.c | 192 ++++++++++++++++++++++++++++++----------
>>> 4 files changed, 177 insertions(+), 48 deletions(-)
>>>
>>
>>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>>> index 0377e1d059e5..eef61e0f5170 100644
>>> --- a/mm/vmalloc.c
>>> +++ b/mm/vmalloc.c
>>
>>> @@ -2691,15 +2746,18 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
>>> #endif /* CONFIG_VMAP_PFN */
>>>
>>> static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>>> - pgprot_t prot, int node)
>>> + pgprot_t prot, unsigned int page_shift,
>>> + int node)
>>> {
>>> const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
>>> - unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
>>> - unsigned long array_size;
>>> - unsigned int i;
>>> + unsigned int page_order = page_shift - PAGE_SHIFT;
>>> + unsigned long addr = (unsigned long)area->addr;
>>> + unsigned long size = get_vm_area_size(area);
>>> + unsigned int nr_small_pages = size >> PAGE_SHIFT;
>>> struct page **pages;
>>> + unsigned int i;
>>>
>>> - array_size = (unsigned long)nr_pages * sizeof(struct page *);
>>> + array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
>>
>> array_size() is a function in include/linux/overflow.h
>>
>> For some reason, it breaks the build with your series.
>
> What config? I haven't seen it.
>
Several configs I believe. I saw it this morning in
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210124082230.2118861-13-npiggin@gmail.com/
Though the reports have all disappeared now.
^ permalink raw reply [flat|nested] 34+ messages in thread
* RE: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-25 9:14 ` Christophe Leroy
2021-01-25 11:37 ` Nicholas Piggin
@ 2021-01-25 12:24 ` David Laight
2021-01-26 9:50 ` Nicholas Piggin
1 sibling, 1 reply; 34+ messages in thread
From: David Laight @ 2021-01-25 12:24 UTC (permalink / raw)
To: 'Christophe Leroy', Nicholas Piggin, linux-mm, Andrew Morton
Cc: linux-arch, Ding Tianhong, linux-kernel, Christoph Hellwig,
Zefan Li, Jonathan Cameron, Rick Edgecombe, linuxppc-dev
From: Christophe Leroy
> Sent: 25 January 2021 09:15
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
> > Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
> > enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
> > supports PMD sized vmap mappings.
> >
> > vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
> > or larger, and fall back to small pages if that was unsuccessful.
> >
> > Architectures must ensure that any arch specific vmalloc allocations
> > that require PAGE_SIZE mappings (e.g., module allocations vs strict
> > module rwx) use the VM_NOHUGE flag to inhibit larger mappings.
> >
> > When hugepage vmalloc mappings are enabled in the next patch, this
> > reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node
> > POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
> >
> > This can result in more internal fragmentation and memory overhead for a
> > given allocation, an option nohugevmalloc is added to disable at boot.
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> > arch/Kconfig | 10 +++
> > include/linux/vmalloc.h | 18 ++++
> > mm/page_alloc.c | 5 +-
> > mm/vmalloc.c | 192 ++++++++++++++++++++++++++++++----------
> > 4 files changed, 177 insertions(+), 48 deletions(-)
> >
>
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 0377e1d059e5..eef61e0f5170 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
>
> > @@ -2691,15 +2746,18 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
> > #endif /* CONFIG_VMAP_PFN */
> >
> > static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> > - pgprot_t prot, int node)
> > + pgprot_t prot, unsigned int page_shift,
> > + int node)
> > {
> > const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> > - unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
> > - unsigned long array_size;
> > - unsigned int i;
> > + unsigned int page_order = page_shift - PAGE_SHIFT;
> > + unsigned long addr = (unsigned long)area->addr;
> > + unsigned long size = get_vm_area_size(area);
> > + unsigned int nr_small_pages = size >> PAGE_SHIFT;
> > struct page **pages;
> > + unsigned int i;
> >
> > - array_size = (unsigned long)nr_pages * sizeof(struct page *);
> > + array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
>
> array_size() is a function in include/linux/overflow.h
>
> For some reason, it breaks the build with your series.
I can't see the replacement definition for array_size.
The old local variable is deleted.
David
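[To spell out the clash: include/linux/overflow.h defines array_size() as a
function-like helper, so once the patch deletes the local "unsigned long
array_size;" declaration while keeping the assignment, the assignment targets
the function name and the build fails. A standalone illustration (plain C,
not the kernel code itself):

	/* clash.c - stand-in for the overflow.h helper, to show the clash. */
	#include <stddef.h>

	static inline size_t array_size(size_t a, size_t b)
	{
		return a * b;	/* the real helper also checks for overflow */
	}

	int main(void)
	{
		unsigned int nr_small_pages = 16;

		/* Without a local declaration in scope, this line would be
		 * an assignment to a function designator and cannot compile:
		 *
		 *	array_size = nr_small_pages * sizeof(void *);
		 */

		/* A local declaration shadows the helper in this scope,
		 * and the assignment builds again: */
		unsigned long array_size;

		array_size = (unsigned long)nr_small_pages * sizeof(void *);
		return array_size ? 0 : 1;
	}
]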
^ permalink raw reply [flat|nested] 34+ messages in thread
* RE: [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings
2021-01-25 12:24 ` David Laight
@ 2021-01-26 9:50 ` Nicholas Piggin
0 siblings, 0 replies; 34+ messages in thread
From: Nicholas Piggin @ 2021-01-26 9:50 UTC (permalink / raw)
To: Andrew Morton, 'Christophe Leroy', David Laight, linux-mm
Cc: Ding Tianhong, Christoph Hellwig, Jonathan Cameron, linux-arch,
linux-kernel, linuxppc-dev, Zefan Li, Rick Edgecombe
Excerpts from David Laight's message of January 25, 2021 10:24 pm:
> From: Christophe Leroy
>> Sent: 25 January 2021 09:15
>>
>> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> > Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
>> > enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
>> > support PMD-sized vmap mappings.
>> >
>> > vmalloc will attempt to allocate PMD-sized pages when the allocation is
>> > PMD-sized or larger, and will fall back to small pages if that is
>> > unsuccessful.
>> >
>> > Architectures must ensure that any arch-specific vmalloc allocations
>> > that require PAGE_SIZE mappings (e.g., module allocations with strict
>> > module rwx) use the VM_NOHUGE flag to inhibit larger mappings.
>> >
>> > When hugepage vmalloc mappings are enabled in the next patch, this
>> > reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node
>> > POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
>> >
>> > This can result in more internal fragmentation and memory overhead for a
>> > given allocation; an option, nohugevmalloc, is added to disable it at boot.
>> >
>> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> > ---
>> > arch/Kconfig | 10 +++
>> > include/linux/vmalloc.h | 18 ++++
>> > mm/page_alloc.c | 5 +-
>> > mm/vmalloc.c | 192 ++++++++++++++++++++++++++++++----------
>> > 4 files changed, 177 insertions(+), 48 deletions(-)
>> >
>>
>> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> > index 0377e1d059e5..eef61e0f5170 100644
>> > --- a/mm/vmalloc.c
>> > +++ b/mm/vmalloc.c
>>
>> > @@ -2691,15 +2746,18 @@ EXPORT_SYMBOL_GPL(vmap_pfn);
>> > #endif /* CONFIG_VMAP_PFN */
>> >
>> > static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>> > - pgprot_t prot, int node)
>> > + pgprot_t prot, unsigned int page_shift,
>> > + int node)
>> > {
>> > const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
>> > - unsigned int nr_pages = get_vm_area_size(area) >> PAGE_SHIFT;
>> > - unsigned long array_size;
>> > - unsigned int i;
>> > + unsigned int page_order = page_shift - PAGE_SHIFT;
>> > + unsigned long addr = (unsigned long)area->addr;
>> > + unsigned long size = get_vm_area_size(area);
>> > + unsigned int nr_small_pages = size >> PAGE_SHIFT;
>> > struct page **pages;
>> > + unsigned int i;
>> >
>> > - array_size = (unsigned long)nr_pages * sizeof(struct page *);
>> > + array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
>>
>> array_size() is a function in include/linux/overflow.h
>>
>> For some reason, it breaks the build with your series.
>
> I can't see the replacement definition for array_size.
> The old local variable is deleted.
Yeah, I saw that after taking another look; I must have sent in a bad diff.
v11 fixed that and a couple of other compile issues.
Thanks,
Nick
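[For reference on the VM_NOHUGE flag discussed in the quoted commit message:
an architecture that needs strict PAGE_SIZE module mappings would pass the
flag through __vmalloc_node_range(). A hedged sketch, roughly along the lines
of the series' powerpc module_alloc() change (not verbatim; bounds and flags
may differ):

	/* Force small-page mappings for module text so permissions can be
	 * changed at page granularity (strict module rwx). */
	void *module_alloc(unsigned long size)
	{
		return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
					    GFP_KERNEL, PAGE_KERNEL_EXEC,
					    VM_NOHUGE | VM_FLUSH_RESET_PERMS,
					    NUMA_NO_NODE,
					    __builtin_return_address(0));
	}
]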
^ permalink raw reply [flat|nested] 34+ messages in thread
end of thread, other threads:[~2021-01-26 9:50 UTC | newest]
Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-24 8:22 [PATCH v10 00/12] huge vmalloc mappings Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 01/12] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings Nicholas Piggin
2021-01-24 11:31 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 02/12] mm: apply_to_pte_range warn and fail if a large pte is encountered Nicholas Piggin
2021-01-24 11:32 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 03/12] mm/vmalloc: rename vmap_*_range vmap_pages_*_range Nicholas Piggin
2021-01-24 11:34 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 04/12] mm/ioremap: rename ioremap_*_range to vmap_*_range Nicholas Piggin
2021-01-24 11:36 ` Christoph Hellwig
2021-01-24 12:04 ` Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 05/12] mm: HUGE_VMAP arch support cleanup Nicholas Piggin
2021-01-24 11:40 ` Christoph Hellwig
2021-01-24 12:22 ` Nicholas Piggin
2021-01-25 8:19 ` Christophe Leroy
2021-01-25 8:40 ` Christophe Leroy
2021-01-24 8:22 ` [PATCH v10 06/12] powerpc: inline huge vmap supported functions Nicholas Piggin
2021-01-25 8:42 ` Christophe Leroy
2021-01-25 11:37 ` Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 07/12] arm64: " Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 08/12] x86: " Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 09/12] mm: Move vmap_range from mm/ioremap.c to mm/vmalloc.c Nicholas Piggin
2021-01-24 14:49 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 10/12] mm/vmalloc: add vmap_range_noflush variant Nicholas Piggin
2021-01-24 14:51 ` Christoph Hellwig
2021-01-24 8:22 ` [PATCH v10 11/12] mm/vmalloc: Hugepage vmalloc mappings Nicholas Piggin
2021-01-24 15:07 ` Christoph Hellwig
2021-01-24 18:06 ` Randy Dunlap
2021-01-24 23:17 ` Nicholas Piggin
2021-01-25 9:14 ` Christophe Leroy
2021-01-25 11:37 ` Nicholas Piggin
2021-01-25 12:13 ` Christophe Leroy
2021-01-25 12:24 ` David Laight
2021-01-26 9:50 ` Nicholas Piggin
2021-01-24 8:22 ` [PATCH v10 12/12] powerpc/64s/radix: Enable huge " Nicholas Piggin