* [to-be-updated] x86-vmemmap-drop-handling-of-1gb-vmemmap-ranges.patch removed from -mm tree
@ 2021-03-10 22:54 akpm
From: akpm @ 2021-03-10 22:54 UTC
To: bp, dave.hansen, david, hpa, luto, mhocko, mingo, mm-commits,
osalvador, peterz, tglx
The patch titled
Subject: x86/vmemmap: drop handling of 1GB vmemmap ranges
has been removed from the -mm tree. Its filename was
x86-vmemmap-drop-handling-of-1gb-vmemmap-ranges.patch
This patch was dropped because an updated version will be merged
------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: x86/vmemmap: drop handling of 1GB vmemmap ranges
We never get to allocate 1GB pages when mapping the vmemmap range. Drop
the dead code for both the aligned and unaligned cases and leave only the
direct map handling.
Link: https://lkml.kernel.org/r/20210301083230.30924-3-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H . Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
arch/x86/mm/init_64.c | 35 +++++++----------------------------
1 file changed, 7 insertions(+), 28 deletions(-)
--- a/arch/x86/mm/init_64.c~x86-vmemmap-drop-handling-of-1gb-vmemmap-ranges
+++ a/arch/x86/mm/init_64.c
@@ -1062,7 +1062,6 @@ remove_pud_table(pud_t *pud_start, unsig
 	unsigned long next, pages = 0;
 	pmd_t *pmd_base;
 	pud_t *pud;
-	void *page_addr;
 
 	pud = pud_start + pud_index(addr);
 	for (; addr < end; addr = next, pud++) {
@@ -1071,33 +1070,13 @@ remove_pud_table(pud_t *pud_start, unsig
 		if (!pud_present(*pud))
 			continue;
 
-		if (pud_large(*pud)) {
-			if (IS_ALIGNED(addr, PUD_SIZE) &&
-			    IS_ALIGNED(next, PUD_SIZE)) {
-				if (!direct)
-					free_pagetable(pud_page(*pud),
-						       get_order(PUD_SIZE));
-
-				spin_lock(&init_mm.page_table_lock);
-				pud_clear(pud);
-				spin_unlock(&init_mm.page_table_lock);
-				pages++;
-			} else {
-				/* If here, we are freeing vmemmap pages. */
-				memset((void *)addr, PAGE_INUSE, next - addr);
-
-				page_addr = page_address(pud_page(*pud));
-				if (!memchr_inv(page_addr, PAGE_INUSE,
-						PUD_SIZE)) {
-					free_pagetable(pud_page(*pud),
-						       get_order(PUD_SIZE));
-
-					spin_lock(&init_mm.page_table_lock);
-					pud_clear(pud);
-					spin_unlock(&init_mm.page_table_lock);
-				}
-			}
-
+		if (pud_large(*pud) &&
+		    IS_ALIGNED(addr, PUD_SIZE) &&
+		    IS_ALIGNED(next, PUD_SIZE)) {
+			spin_lock(&init_mm.page_table_lock);
+			pud_clear(pud);
+			spin_unlock(&init_mm.page_table_lock);
+			pages++;
 			continue;
 		}
_
Patches currently in -mm which might be from osalvador@suse.de are
x86-vmemmap-handle-unpopulated-sub-pmd-ranges.patch