* [PATCH 0/2] x86: Retry to remove vmalloc/ioremap synchronization
@ 2020-08-14 15:19 Joerg Roedel
2020-08-14 15:19 ` [PATCH 1/2] x86/mm/64: Do not sync vmalloc/ioremap mappings Joerg Roedel
2020-08-14 15:19 ` [PATCH 2/2] x86/mm/64: Update comment in preallocate_vmalloc_pages() Joerg Roedel
0 siblings, 2 replies; 5+ messages in thread
From: Joerg Roedel @ 2020-08-14 15:19 UTC (permalink / raw)
To: x86
Cc: Ingo Molnar, Mike Rapoport, Dave Hansen, Andy Lutomirski,
Peter Zijlstra, hpa, Linus Torvalds, Jason, Andrew Morton,
linux-kernel, kirill.shutemov, Joerg Roedel
From: Joerg Roedel <jroedel@suse.de>
Hi,
as discussed here are the updates to the recent patches and fixes to
pre-allocate the vmalloc/ioremap second-level page-table pages on
x86-64.
Patch one is a re-send of
commit 8bb9bf242d1f ("x86/mm/64: Do not sync vmalloc/ioremap mappings")
with more explanation of what broke, what fixed it, and why it is now
safe to apply it again.
Patch two updates the comment in preallocate_vmalloc_pages(). It is
mostly Dave Hansen's wording, so he really deserves authorship of it;
I just didn't want to commit/send it in his name without asking.
Feel free to change the authorship of this patch to him.
Regards,
Joerg
Joerg Roedel (2):
x86/mm/64: Do not sync vmalloc/ioremap mappings
x86/mm/64: Update comment in preallocate_vmalloc_pages()
arch/x86/include/asm/pgtable_64_types.h | 2 --
arch/x86/mm/init_64.c | 20 ++++++++++----------
2 files changed, 10 insertions(+), 12 deletions(-)
--
2.28.0
* [PATCH 1/2] x86/mm/64: Do not sync vmalloc/ioremap mappings
2020-08-14 15:19 [PATCH 0/2] x86: Retry to remove vmalloc/ioremap synchronization Joerg Roedel
@ 2020-08-14 15:19 ` Joerg Roedel
2020-08-15 15:46 ` [tip: x86/mm] " tip-bot2 for Joerg Roedel
2020-08-14 15:19 ` [PATCH 2/2] x86/mm/64: Update comment in preallocate_vmalloc_pages() Joerg Roedel
1 sibling, 1 reply; 5+ messages in thread
From: Joerg Roedel @ 2020-08-14 15:19 UTC (permalink / raw)
To: x86
Cc: Ingo Molnar, Mike Rapoport, Dave Hansen, Andy Lutomirski,
Peter Zijlstra, hpa, Linus Torvalds, Jason, Andrew Morton,
linux-kernel, kirill.shutemov, Joerg Roedel
From: Joerg Roedel <jroedel@suse.de>
Remove the code to sync the vmalloc and ioremap ranges for x86-64. The
page-table pages are all pre-allocated so that synchronization is
no longer necessary.
This is a patch that already went into the kernel as:
commit 8bb9bf242d1f ("x86/mm/64: Do not sync vmalloc/ioremap mappings")
But it had to be reverted later because it uncovered a bug in:
commit 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for vmalloc area")
The bug in that commit caused the P4D/PUD pages not to be correctly
allocated, so the synchronization was still necessary. That issue has
since been fixed upstream:
commit 995909a4e22b ("x86/mm/64: Do not dereference non-present PGD entries")
With that fix it is safe again to remove the page-table synchronization
for vmalloc/ioremap ranges on x86-64.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
arch/x86/include/asm/pgtable_64_types.h | 2 --
arch/x86/mm/init_64.c | 5 -----
2 files changed, 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 8f63efb2a2cc..52e5f5f2240d 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -159,6 +159,4 @@ extern unsigned int ptrs_per_p4d;
#define PGD_KERNEL_START ((PAGE_SIZE / 2) / sizeof(pgd_t))
-#define ARCH_PAGE_TABLE_SYNC_MASK (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
-
#endif /* _ASM_X86_PGTABLE_64_DEFS_H */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a4ac13cc3fdc..777d83546764 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -217,11 +217,6 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
sync_global_pgds_l4(start, end);
}
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
-{
- sync_global_pgds(start, end);
-}
-
/*
* NOTE: This function is marked __ref because it calls __init function
* (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
--
2.28.0
* [PATCH 2/2] x86/mm/64: Update comment in preallocate_vmalloc_pages()
2020-08-14 15:19 [PATCH 0/2] x86: Retry to remove vmalloc/ioremap synchronization Joerg Roedel
2020-08-14 15:19 ` [PATCH 1/2] x86/mm/64: Do not sync vmalloc/ioremap mappings Joerg Roedel
@ 2020-08-14 15:19 ` Joerg Roedel
2020-08-15 15:46 ` [tip: x86/mm] " tip-bot2 for Joerg Roedel
1 sibling, 1 reply; 5+ messages in thread
From: Joerg Roedel @ 2020-08-14 15:19 UTC (permalink / raw)
To: x86
Cc: Ingo Molnar, Mike Rapoport, Dave Hansen, Andy Lutomirski,
Peter Zijlstra, hpa, Linus Torvalds, Jason, Andrew Morton,
linux-kernel, kirill.shutemov, Joerg Roedel
From: Joerg Roedel <jroedel@suse.de>
The comment explaining why 4-level systems only need to allocate on
the P4D level caused some confusion. Update it to better explain why
the allocation on the PUD level is necessary on 4-level systems.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
arch/x86/mm/init_64.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 777d83546764..124e63795ac9 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1252,14 +1252,19 @@ static void __init preallocate_vmalloc_pages(void)
if (!p4d)
goto failed;
- /*
- * With 5-level paging the P4D level is not folded. So the PGDs
- * are now populated and there is no need to walk down to the
- * PUD level.
- */
if (pgtable_l5_enabled())
continue;
+ /*
+ * The goal here is to allocate all possibly required
+ * hardware page tables pointed to by the top hardware
+ * level.
+ *
+ * On 4-level systems, the p4d layer is folded away and
+ * the above code does no preallocation. Below, go down
+ * to the pud _software_ level to ensure the second
+ * hardware level is allocated on 4-level systems too.
+ */
lvl = "pud";
pud = pud_alloc(&init_mm, p4d, addr);
if (!pud)
--
2.28.0
* [tip: x86/mm] x86/mm/64: Update comment in preallocate_vmalloc_pages()
2020-08-14 15:19 ` [PATCH 2/2] x86/mm/64: Update comment in preallocate_vmalloc_pages() Joerg Roedel
@ 2020-08-15 15:46 ` tip-bot2 for Joerg Roedel
0 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-08-15 15:46 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Joerg Roedel, Ingo Molnar, x86, LKML
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: 7a27ef5e83089090f3a4073a9157c862ef00acfc
Gitweb: https://git.kernel.org/tip/7a27ef5e83089090f3a4073a9157c862ef00acfc
Author: Joerg Roedel <jroedel@suse.de>
AuthorDate: Fri, 14 Aug 2020 17:19:47 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Sat, 15 Aug 2020 13:56:16 +02:00
x86/mm/64: Update comment in preallocate_vmalloc_pages()
The comment explaining why 4-level systems only need to allocate on
the P4D level caused some confusion. Update it to better explain why
the allocation on the PUD level is necessary on 4-level systems.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200814151947.26229-3-joro@8bytes.org
---
arch/x86/mm/init_64.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 777d835..b5a3fa4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1252,14 +1252,19 @@ static void __init preallocate_vmalloc_pages(void)
if (!p4d)
goto failed;
- /*
- * With 5-level paging the P4D level is not folded. So the PGDs
- * are now populated and there is no need to walk down to the
- * PUD level.
- */
if (pgtable_l5_enabled())
continue;
+ /*
+ * The goal here is to allocate all possibly required
+ * hardware page tables pointed to by the top hardware
+ * level.
+ *
+ * On 4-level systems, the P4D layer is folded away and
+ * the above code does no preallocation. Below, go down
+ * to the pud _software_ level to ensure the second
+ * hardware level is allocated on 4-level systems too.
+ */
lvl = "pud";
pud = pud_alloc(&init_mm, p4d, addr);
if (!pud)
* [tip: x86/mm] x86/mm/64: Do not sync vmalloc/ioremap mappings
2020-08-14 15:19 ` [PATCH 1/2] x86/mm/64: Do not sync vmalloc/ioremap mappings Joerg Roedel
@ 2020-08-15 15:46 ` tip-bot2 for Joerg Roedel
0 siblings, 0 replies; 5+ messages in thread
From: tip-bot2 for Joerg Roedel @ 2020-08-15 15:46 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Joerg Roedel, Ingo Molnar, x86, LKML
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: 58a18fe95e83b8396605154db04d73b08063f31b
Gitweb: https://git.kernel.org/tip/58a18fe95e83b8396605154db04d73b08063f31b
Author: Joerg Roedel <jroedel@suse.de>
AuthorDate: Fri, 14 Aug 2020 17:19:46 +02:00
Committer: Ingo Molnar <mingo@kernel.org>
CommitterDate: Sat, 15 Aug 2020 13:56:16 +02:00
x86/mm/64: Do not sync vmalloc/ioremap mappings
Remove the code to sync the vmalloc and ioremap ranges for x86-64. The
page-table pages are all pre-allocated so that synchronization is
no longer necessary.
This is a patch that already went into the kernel as:
commit 8bb9bf242d1f ("x86/mm/64: Do not sync vmalloc/ioremap mappings")
But it had to be reverted later because it uncovered a bug in:
commit 6eb82f994026 ("x86/mm: Pre-allocate P4D/PUD pages for vmalloc area")
The bug in that commit caused the P4D/PUD pages not to be correctly
allocated, so the synchronization was still necessary. That issue has
since been fixed upstream:
commit 995909a4e22b ("x86/mm/64: Do not dereference non-present PGD entries")
With that fix it is safe again to remove the page-table synchronization
for vmalloc/ioremap ranges on x86-64.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200814151947.26229-2-joro@8bytes.org
---
arch/x86/include/asm/pgtable_64_types.h | 2 --
arch/x86/mm/init_64.c | 5 -----
2 files changed, 7 deletions(-)
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 8f63efb..52e5f5f 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -159,6 +159,4 @@ extern unsigned int ptrs_per_p4d;
#define PGD_KERNEL_START ((PAGE_SIZE / 2) / sizeof(pgd_t))
-#define ARCH_PAGE_TABLE_SYNC_MASK (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
-
#endif /* _ASM_X86_PGTABLE_64_DEFS_H */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a4ac13c..777d835 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -217,11 +217,6 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
sync_global_pgds_l4(start, end);
}
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
-{
- sync_global_pgds(start, end);
-}
-
/*
* NOTE: This function is marked __ref because it calls __init function
* (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.