* [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c
@ 2023-03-21 17:04 Mike Rapoport
2023-03-21 17:05 ` [PATCH v2 01/14] mips: fix comment about pgtable_init() Mike Rapoport
` (14 more replies)
0 siblings, 15 replies; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:04 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
Also in git:
https://git.kernel.org/rppt/h/mm-init/v2
v2:
* move init_cma_reserved_pageblock() from cma.c to mm_init.c
* rename init_mem_debugging_and_hardening() to
mem_debugging_and_hardening_init()
* inline pgtable_init() into mem_core_init()
* add Acked and Reviewed tags (thanks David, hopefully I've picked them
right)
v1: https://lore.kernel.org/all/20230319220008.2138576-1-rppt@kernel.org
This set moves most of the core MM initialization to mm/mm_init.c.
This largely includes free_area_init() and its helpers, functions used at
boot time, mm_init() from init/main.c and some of the functions it calls.
Aside from gaining some more space before mm/page_alloc.c hits 10k lines,
this makes mm/page_alloc.c mostly about the buddy allocator and moves the
init code out of the way, which IMO improves maintainability.
Besides, this allows moving a couple of declarations out of include/linux
and make them private to mm/.
And as an added bonus there is a slight decrease in vmlinux size.
For tinyconfig and defconfig on x86 I've got
tinyconfig:
text data bss dec hex filename
853206 289376 1200128 2342710 23bf36 a/vmlinux
853198 289344 1200128 2342670 23bf0e b/vmlinux
defconfig:
text data bss dec hex filename
26152959 9730634 2170884 38054477 244aa4d a/vmlinux
26152945 9730602 2170884 38054431 244aa1f b/vmlinux
Mike Rapoport (IBM) (14):
mips: fix comment about pgtable_init()
mm/page_alloc: add helper for checking if check_pages_enabled
mm: move most of core MM initialization to mm/mm_init.c
mm: handle hashdist initialization in mm/mm_init.c
mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp()
init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init()
init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init()
mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
mm: move init_mem_debugging_and_hardening() to mm/mm_init.c
init,mm: fold late call to page_ext_init() to page_alloc_init_late()
mm: move mem_init_print_info() to mm_init.c
mm: move kmem_cache_init() declaration to mm/slab.h
mm: move vmalloc_init() declaration to mm/internal.h
MAINTAINERS: extend memblock entry to include MM initialization
MAINTAINERS | 3 +-
arch/mips/include/asm/fixmap.h | 2 +-
include/linux/gfp.h | 7 +-
include/linux/mm.h | 9 +-
include/linux/page_ext.h | 2 -
include/linux/slab.h | 1 -
include/linux/vmalloc.h | 4 -
init/main.c | 74 +-
mm/cma.c | 1 +
mm/internal.h | 52 +-
mm/mm_init.c | 2547 +++++++++++++++++++++++++++
mm/page_alloc.c | 2981 +++-----------------------------
mm/slab.h | 1 +
13 files changed, 2856 insertions(+), 2828 deletions(-)
base-commit: 4018ab1f7cec061b8425737328edefebdc0ab832
--
2.35.1
^ permalink raw reply [flat|nested] 35+ messages in thread
* [PATCH v2 01/14] mips: fix comment about pgtable_init()
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 11:36 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled Mike Rapoport
` (13 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
The comment about fixrange_init() says that it is called from pgtable_init()
while the actual caller is pagetable_init().
Update the comment to match the code.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
arch/mips/include/asm/fixmap.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/mips/include/asm/fixmap.h b/arch/mips/include/asm/fixmap.h
index beea14761cef..b037718d7e8b 100644
--- a/arch/mips/include/asm/fixmap.h
+++ b/arch/mips/include/asm/fixmap.h
@@ -70,7 +70,7 @@ enum fixed_addresses {
#include <asm-generic/fixmap.h>
/*
- * Called from pgtable_init()
+ * Called from pagetable_init()
*/
extern void fixrange_init(unsigned long start, unsigned long end,
pgd_t *pgd_base);
--
2.35.1
* [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
2023-03-21 17:05 ` [PATCH v2 01/14] mips: fix comment about pgtable_init() Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 11:38 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c Mike Rapoport
` (12 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
Instead of duplicating the long static_branch_unlikely(&check_pages_enabled)
expression, wrap it in a helper function is_check_pages_enabled().
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
mm/page_alloc.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 87d760236dba..e1149d54d738 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -245,6 +245,11 @@ EXPORT_SYMBOL(init_on_free);
/* perform sanity checks on struct pages being allocated or freed */
static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+static inline bool is_check_pages_enabled(void)
+{
+ return static_branch_unlikely(&check_pages_enabled);
+}
+
static bool _init_on_alloc_enabled_early __read_mostly
= IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
static int __init early_init_on_alloc(char *buf)
@@ -1443,7 +1448,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
for (i = 1; i < (1 << order); i++) {
if (compound)
bad += free_tail_pages_check(page, page + i);
- if (static_branch_unlikely(&check_pages_enabled)) {
+ if (is_check_pages_enabled()) {
if (unlikely(free_page_is_bad(page + i))) {
bad++;
continue;
@@ -1456,7 +1461,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
page->mapping = NULL;
if (memcg_kmem_online() && PageMemcgKmem(page))
__memcg_kmem_uncharge_page(page, order);
- if (static_branch_unlikely(&check_pages_enabled)) {
+ if (is_check_pages_enabled()) {
if (free_page_is_bad(page))
bad++;
if (bad)
@@ -2366,7 +2371,7 @@ static int check_new_page(struct page *page)
static inline bool check_new_pages(struct page *page, unsigned int order)
{
- if (static_branch_unlikely(&check_pages_enabled)) {
+ if (is_check_pages_enabled()) {
for (int i = 0; i < (1 << order); i++) {
struct page *p = page + i;
--
2.35.1
* [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
2023-03-21 17:05 ` [PATCH v2 01/14] mips: fix comment about pgtable_init() Mike Rapoport
2023-03-21 17:05 ` [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 14:49 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp() Mike Rapoport
` (11 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
The hashdist variable must be initialized before the first call to
alloc_large_system_hash(), and free_area_init() looks like a better place
for it than page_alloc_init().
Move hashdist handling to mm/mm_init.c.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/mm_init.c | 22 ++++++++++++++++++++++
mm/page_alloc.c | 18 ------------------
2 files changed, 22 insertions(+), 18 deletions(-)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 68d0187c7886..2e60c7186132 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -607,6 +607,25 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
return nid;
}
+
+int hashdist = HASHDIST_DEFAULT;
+
+static int __init set_hashdist(char *str)
+{
+ if (!str)
+ return 0;
+ hashdist = simple_strtoul(str, &str, 0);
+ return 1;
+}
+__setup("hashdist=", set_hashdist);
+
+static inline void fixup_hashdist(void)
+{
+ if (num_node_state(N_MEMORY) == 1)
+ hashdist = 0;
+}
+#else
+static inline void fixup_hashdist(void) {}
#endif /* CONFIG_NUMA */
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1855,6 +1874,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
}
memmap_init();
+
+ /* disable hash distribution for systems with a single node */
+ fixup_hashdist();
}
/**
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c56c147bdf27..ff6a2fff2880 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6383,28 +6383,10 @@ static int page_alloc_cpu_online(unsigned int cpu)
return 0;
}
-#ifdef CONFIG_NUMA
-int hashdist = HASHDIST_DEFAULT;
-
-static int __init set_hashdist(char *str)
-{
- if (!str)
- return 0;
- hashdist = simple_strtoul(str, &str, 0);
- return 1;
-}
-__setup("hashdist=", set_hashdist);
-#endif
-
void __init page_alloc_init(void)
{
int ret;
-#ifdef CONFIG_NUMA
- if (num_node_state(N_MEMORY) == 1)
- hashdist = 0;
-#endif
-
ret = cpuhp_setup_state_nocalls(CPUHP_PAGE_ALLOC,
"mm/page_alloc:pcp",
page_alloc_cpu_online,
--
2.35.1
* [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp()
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (2 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 14:50 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init() Mike Rapoport
` (10 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
The page_alloc_init() name is really misleading because all this
function does is set up CPU hotplug callbacks for the page allocator.
Rename it to page_alloc_init_cpuhp() so that the name reflects what the
function does.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
include/linux/gfp.h | 2 +-
init/main.c | 2 +-
mm/page_alloc.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 7c554e4bd49f..ed8cb537c6a7 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -319,7 +319,7 @@ extern void page_frag_free(void *addr);
#define __free_page(page) __free_pages((page), 0)
#define free_page(addr) free_pages((addr), 0)
-void page_alloc_init(void);
+void page_alloc_init_cpuhp(void);
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
void drain_all_pages(struct zone *zone);
void drain_local_pages(struct zone *zone);
diff --git a/init/main.c b/init/main.c
index 4425d1783d5c..b2499bee7a3c 100644
--- a/init/main.c
+++ b/init/main.c
@@ -969,7 +969,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
boot_cpu_hotplug_init();
build_all_zonelists(NULL);
- page_alloc_init();
+ page_alloc_init_cpuhp();
pr_notice("Kernel command line: %s\n", saved_command_line);
/* parameters may set static keys */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ff6a2fff2880..d1276bfe7a30 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6383,7 +6383,7 @@ static int page_alloc_cpu_online(unsigned int cpu)
return 0;
}
-void __init page_alloc_init(void)
+void __init page_alloc_init_cpuhp(void)
{
int ret;
--
2.35.1
* [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init()
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (3 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp() Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:10 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init() Mike Rapoport
` (9 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
Both build_all_zonelists() and page_alloc_init_cpuhp() must be called
after SMP setup is complete but before the page allocator is set up.
Still, they are both part of memory management initialization, so move
them to mm_init().
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
init/main.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/init/main.c b/init/main.c
index b2499bee7a3c..4423906177c1 100644
--- a/init/main.c
+++ b/init/main.c
@@ -833,6 +833,10 @@ static void __init report_meminit(void)
*/
static void __init mm_init(void)
{
+ /* Initializations relying on SMP setup */
+ build_all_zonelists(NULL);
+ page_alloc_init_cpuhp();
+
/*
* page_ext requires contiguous pages,
* bigger than MAX_ORDER unless SPARSEMEM.
@@ -968,9 +972,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
boot_cpu_hotplug_init();
- build_all_zonelists(NULL);
- page_alloc_init_cpuhp();
-
pr_notice("Kernel command line: %s\n", saved_command_line);
/* parameters may set static keys */
jump_label_init();
--
2.35.1
* [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init()
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (4 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init() Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:24 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init() Mike Rapoport
` (8 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
Make mm_init() a part of the mm/ codebase. The name mm_core_init() better
describes what the function does and does not clash with mm_init() in
kernel/fork.c.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
include/linux/mm.h | 1 +
init/main.c | 71 ++------------------------------------------
mm/mm_init.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 76 insertions(+), 69 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ee755bb4e1c1..2d7f095136fc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -39,6 +39,7 @@ struct pt_regs;
extern int sysctl_page_lock_unfairness;
+void mm_core_init(void);
void init_mm_internals(void);
#ifndef CONFIG_NUMA /* Don't use mapnrs, do it properly */
diff --git a/init/main.c b/init/main.c
index 4423906177c1..8a20b4c25f24 100644
--- a/init/main.c
+++ b/init/main.c
@@ -803,73 +803,6 @@ static inline void initcall_debug_enable(void)
}
#endif
-/* Report memory auto-initialization states for this boot. */
-static void __init report_meminit(void)
-{
- const char *stack;
-
- if (IS_ENABLED(CONFIG_INIT_STACK_ALL_PATTERN))
- stack = "all(pattern)";
- else if (IS_ENABLED(CONFIG_INIT_STACK_ALL_ZERO))
- stack = "all(zero)";
- else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL))
- stack = "byref_all(zero)";
- else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF))
- stack = "byref(zero)";
- else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_USER))
- stack = "__user(zero)";
- else
- stack = "off";
-
- pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
- stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
- want_init_on_free() ? "on" : "off");
- if (want_init_on_free())
- pr_info("mem auto-init: clearing system memory may take some time...\n");
-}
-
-/*
- * Set up kernel memory allocators
- */
-static void __init mm_init(void)
-{
- /* Initializations relying on SMP setup */
- build_all_zonelists(NULL);
- page_alloc_init_cpuhp();
-
- /*
- * page_ext requires contiguous pages,
- * bigger than MAX_ORDER unless SPARSEMEM.
- */
- page_ext_init_flatmem();
- init_mem_debugging_and_hardening();
- kfence_alloc_pool();
- report_meminit();
- kmsan_init_shadow();
- stack_depot_early_init();
- mem_init();
- mem_init_print_info();
- kmem_cache_init();
- /*
- * page_owner must be initialized after buddy is ready, and also after
- * slab is ready so that stack_depot_init() works properly
- */
- page_ext_init_flatmem_late();
- kmemleak_init();
- pgtable_init();
- debug_objects_mem_init();
- vmalloc_init();
- /* If no deferred init page_ext now, as vmap is fully initialized */
- if (!deferred_struct_pages)
- page_ext_init();
- /* Should be run before the first non-init thread is created */
- init_espfix_bsp();
- /* Should be run after espfix64 is set up. */
- pti_init();
- kmsan_init_runtime();
- mm_cache_init();
-}
-
#ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET
DEFINE_STATIC_KEY_MAYBE_RO(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,
randomize_kstack_offset);
@@ -993,13 +926,13 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
/*
* These use large bootmem allocations and must precede
- * kmem_cache_init()
+ * initalization of page allocator
*/
setup_log_buf(0);
vfs_caches_init_early();
sort_main_extable();
trap_init();
- mm_init();
+ mm_core_init();
poking_init();
ftrace_init();
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2e60c7186132..bba73f1fb277 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -20,9 +20,15 @@
#include <linux/nmi.h>
#include <linux/buffer_head.h>
#include <linux/kmemleak.h>
+#include <linux/kfence.h>
+#include <linux/page_ext.h>
+#include <linux/pti.h>
+#include <linux/pgtable.h>
#include "internal.h"
#include "shuffle.h"
+#include <asm/setup.h>
+
#ifdef CONFIG_DEBUG_MEMORY_INIT
int __meminitdata mminit_loglevel;
@@ -2524,3 +2530,70 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
}
__free_pages_core(page, order);
}
+
+/* Report memory auto-initialization states for this boot. */
+static void __init report_meminit(void)
+{
+ const char *stack;
+
+ if (IS_ENABLED(CONFIG_INIT_STACK_ALL_PATTERN))
+ stack = "all(pattern)";
+ else if (IS_ENABLED(CONFIG_INIT_STACK_ALL_ZERO))
+ stack = "all(zero)";
+ else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL))
+ stack = "byref_all(zero)";
+ else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF))
+ stack = "byref(zero)";
+ else if (IS_ENABLED(CONFIG_GCC_PLUGIN_STRUCTLEAK_USER))
+ stack = "__user(zero)";
+ else
+ stack = "off";
+
+ pr_info("mem auto-init: stack:%s, heap alloc:%s, heap free:%s\n",
+ stack, want_init_on_alloc(GFP_KERNEL) ? "on" : "off",
+ want_init_on_free() ? "on" : "off");
+ if (want_init_on_free())
+ pr_info("mem auto-init: clearing system memory may take some time...\n");
+}
+
+/*
+ * Set up kernel memory allocators
+ */
+void __init mm_core_init(void)
+{
+ /* Initializations relying on SMP setup */
+ build_all_zonelists(NULL);
+ page_alloc_init_cpuhp();
+
+ /*
+ * page_ext requires contiguous pages,
+ * bigger than MAX_ORDER unless SPARSEMEM.
+ */
+ page_ext_init_flatmem();
+ init_mem_debugging_and_hardening();
+ kfence_alloc_pool();
+ report_meminit();
+ kmsan_init_shadow();
+ stack_depot_early_init();
+ mem_init();
+ mem_init_print_info();
+ kmem_cache_init();
+ /*
+ * page_owner must be initialized after buddy is ready, and also after
+ * slab is ready so that stack_depot_init() works properly
+ */
+ page_ext_init_flatmem_late();
+ kmemleak_init();
+ pgtable_init();
+ debug_objects_mem_init();
+ vmalloc_init();
+ /* If no deferred init page_ext now, as vmap is fully initialized */
+ if (!deferred_struct_pages)
+ page_ext_init();
+ /* Should be run before the first non-init thread is created */
+ init_espfix_bsp();
+ /* Should be run after espfix64 is set up. */
+ pti_init();
+ kmsan_init_runtime();
+ mm_cache_init();
+}
--
2.35.1
* [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (5 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init() Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 9:06 ` Sergei Shtylyov
2023-03-21 17:05 ` [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c Mike Rapoport
` (7 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
and drop pgtable_init() as it has no real value and its name is
misleading.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
include/linux/mm.h | 6 ------
mm/mm_init.c | 3 ++-
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2d7f095136fc..c3c67d8bc833 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2782,12 +2782,6 @@ static inline bool ptlock_init(struct page *page) { return true; }
static inline void ptlock_free(struct page *page) {}
#endif /* USE_SPLIT_PTE_PTLOCKS */
-static inline void pgtable_init(void)
-{
- ptlock_cache_init();
- pgtable_cache_init();
-}
-
static inline bool pgtable_pte_page_ctor(struct page *page)
{
if (!ptlock_init(page))
diff --git a/mm/mm_init.c b/mm/mm_init.c
index bba73f1fb277..f1475413394d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2584,7 +2584,8 @@ void __init mm_core_init(void)
*/
page_ext_init_flatmem_late();
kmemleak_init();
- pgtable_init();
+ ptlock_cache_init();
+ pgtable_cache_init();
debug_objects_mem_init();
vmalloc_init();
/* If no deferred init page_ext now, as vmap is fully initialized */
--
2.35.1
* [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (6 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init() Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:28 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late() Mike Rapoport
` (6 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
init_mem_debugging_and_hardening() is only called from mm_core_init().
Move it close to the caller, make it static, and rename it to
mem_debugging_and_hardening_init() for consistency with the surrounding
convention.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
include/linux/mm.h | 1 -
mm/internal.h | 8 ++++
mm/mm_init.c | 91 +++++++++++++++++++++++++++++++++++++++++++-
mm/page_alloc.c | 95 ----------------------------------------------
4 files changed, 98 insertions(+), 97 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c3c67d8bc833..2fecabb1a328 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3394,7 +3394,6 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
unsigned long address, unsigned long size,
pte_fn_t fn, void *data);
-extern void __init init_mem_debugging_and_hardening(void);
#ifdef CONFIG_PAGE_POISONING
extern void __kernel_poison_pages(struct page *page, int numpages);
extern void __kernel_unpoison_pages(struct page *page, int numpages);
diff --git a/mm/internal.h b/mm/internal.h
index 2a925de49393..4750e3a7fd0d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -204,6 +204,14 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
extern char * const zone_names[MAX_NR_ZONES];
+/* perform sanity checks on struct pages being allocated or freed */
+DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+
+static inline bool is_check_pages_enabled(void)
+{
+ return static_branch_unlikely(&check_pages_enabled);
+}
+
/*
* Structure for holding the mostly immutable allocation parameters passed
* between functions involved in allocations, including the alloc_pages*
diff --git a/mm/mm_init.c b/mm/mm_init.c
index f1475413394d..43f6d3ed24ef 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2531,6 +2531,95 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
__free_pages_core(page, order);
}
+static bool _init_on_alloc_enabled_early __read_mostly
+ = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
+static int __init early_init_on_alloc(char *buf)
+{
+
+ return kstrtobool(buf, &_init_on_alloc_enabled_early);
+}
+early_param("init_on_alloc", early_init_on_alloc);
+
+static bool _init_on_free_enabled_early __read_mostly
+ = IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
+static int __init early_init_on_free(char *buf)
+{
+ return kstrtobool(buf, &_init_on_free_enabled_early);
+}
+early_param("init_on_free", early_init_on_free);
+
+DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+
+/*
+ * Enable static keys related to various memory debugging and hardening options.
+ * Some override others, and depend on early params that are evaluated in the
+ * order of appearance. So we need to first gather the full picture of what was
+ * enabled, and then make decisions.
+ */
+static void __init mem_debugging_and_hardening_init(void)
+{
+ bool page_poisoning_requested = false;
+ bool want_check_pages = false;
+
+#ifdef CONFIG_PAGE_POISONING
+ /*
+ * Page poisoning is debug page alloc for some arches. If
+ * either of those options are enabled, enable poisoning.
+ */
+ if (page_poisoning_enabled() ||
+ (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+ debug_pagealloc_enabled())) {
+ static_branch_enable(&_page_poisoning_enabled);
+ page_poisoning_requested = true;
+ want_check_pages = true;
+ }
+#endif
+
+ if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
+ page_poisoning_requested) {
+ pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
+ "will take precedence over init_on_alloc and init_on_free\n");
+ _init_on_alloc_enabled_early = false;
+ _init_on_free_enabled_early = false;
+ }
+
+ if (_init_on_alloc_enabled_early) {
+ want_check_pages = true;
+ static_branch_enable(&init_on_alloc);
+ } else {
+ static_branch_disable(&init_on_alloc);
+ }
+
+ if (_init_on_free_enabled_early) {
+ want_check_pages = true;
+ static_branch_enable(&init_on_free);
+ } else {
+ static_branch_disable(&init_on_free);
+ }
+
+ if (IS_ENABLED(CONFIG_KMSAN) &&
+ (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
+ pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+ if (debug_pagealloc_enabled()) {
+ want_check_pages = true;
+ static_branch_enable(&_debug_pagealloc_enabled);
+
+ if (debug_guardpage_minorder())
+ static_branch_enable(&_debug_guardpage_enabled);
+ }
+#endif
+
+ /*
+ * Any page debugging or hardening option also enables sanity checking
+ * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
+ * enabled already.
+ */
+ if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
+ static_branch_enable(&check_pages_enabled);
+}
+
/* Report memory auto-initialization states for this boot. */
static void __init report_meminit(void)
{
@@ -2570,7 +2659,7 @@ void __init mm_core_init(void)
* bigger than MAX_ORDER unless SPARSEMEM.
*/
page_ext_init_flatmem();
- init_mem_debugging_and_hardening();
+ mem_debugging_and_hardening_init();
kfence_alloc_pool();
report_meminit();
kmsan_init_shadow();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d1276bfe7a30..2f333c26170c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -240,31 +240,6 @@ EXPORT_SYMBOL(init_on_alloc);
DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
EXPORT_SYMBOL(init_on_free);
-/* perform sanity checks on struct pages being allocated or freed */
-static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
-
-static inline bool is_check_pages_enabled(void)
-{
- return static_branch_unlikely(&check_pages_enabled);
-}
-
-static bool _init_on_alloc_enabled_early __read_mostly
- = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
-static int __init early_init_on_alloc(char *buf)
-{
-
- return kstrtobool(buf, &_init_on_alloc_enabled_early);
-}
-early_param("init_on_alloc", early_init_on_alloc);
-
-static bool _init_on_free_enabled_early __read_mostly
- = IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
-static int __init early_init_on_free(char *buf)
-{
- return kstrtobool(buf, &_init_on_free_enabled_early);
-}
-early_param("init_on_free", early_init_on_free);
-
/*
* A cached value of the page's pageblock's migratetype, used when the page is
* put on a pcplist. Used to avoid the pageblock migratetype lookup when
@@ -798,76 +773,6 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
unsigned int order, int migratetype) {}
#endif
-/*
- * Enable static keys related to various memory debugging and hardening options.
- * Some override others, and depend on early params that are evaluated in the
- * order of appearance. So we need to first gather the full picture of what was
- * enabled, and then make decisions.
- */
-void __init init_mem_debugging_and_hardening(void)
-{
- bool page_poisoning_requested = false;
- bool want_check_pages = false;
-
-#ifdef CONFIG_PAGE_POISONING
- /*
- * Page poisoning is debug page alloc for some arches. If
- * either of those options are enabled, enable poisoning.
- */
- if (page_poisoning_enabled() ||
- (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
- debug_pagealloc_enabled())) {
- static_branch_enable(&_page_poisoning_enabled);
- page_poisoning_requested = true;
- want_check_pages = true;
- }
-#endif
-
- if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
- page_poisoning_requested) {
- pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
- "will take precedence over init_on_alloc and init_on_free\n");
- _init_on_alloc_enabled_early = false;
- _init_on_free_enabled_early = false;
- }
-
- if (_init_on_alloc_enabled_early) {
- want_check_pages = true;
- static_branch_enable(&init_on_alloc);
- } else {
- static_branch_disable(&init_on_alloc);
- }
-
- if (_init_on_free_enabled_early) {
- want_check_pages = true;
- static_branch_enable(&init_on_free);
- } else {
- static_branch_disable(&init_on_free);
- }
-
- if (IS_ENABLED(CONFIG_KMSAN) &&
- (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
- pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
-
-#ifdef CONFIG_DEBUG_PAGEALLOC
- if (debug_pagealloc_enabled()) {
- want_check_pages = true;
- static_branch_enable(&_debug_pagealloc_enabled);
-
- if (debug_guardpage_minorder())
- static_branch_enable(&_debug_guardpage_enabled);
- }
-#endif
-
- /*
- * Any page debugging or hardening option also enables sanity checking
- * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
- * enabled already.
- */
- if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
- static_branch_enable(&check_pages_enabled);
-}
-
static inline void set_buddy_order(struct page *page, unsigned int order)
{
set_page_private(page, order);
--
2.35.1
^ permalink raw reply related [flat|nested] 35+ messages in thread
* [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late()
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (7 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:30 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c Mike Rapoport
` (5 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
When deferred initialization of struct pages is enabled, page_ext_init()
must be called after all the deferred initialization is done, but there
is no point in keeping it as a separate call from kernel_init_freeable()
right after page_alloc_init_late().
Fold the call to page_ext_init() into page_alloc_init_late() and
localize deferred_struct_pages variable.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
include/linux/page_ext.h | 2 --
init/main.c | 4 ----
mm/mm_init.c | 6 +++++-
3 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index bc2e39090a1f..67314f648aeb 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -29,8 +29,6 @@ struct page_ext_operations {
bool need_shared_flags;
};
-extern bool deferred_struct_pages;
-
#ifdef CONFIG_PAGE_EXTENSION
/*
diff --git a/init/main.c b/init/main.c
index 8a20b4c25f24..04113514e56a 100644
--- a/init/main.c
+++ b/init/main.c
@@ -62,7 +62,6 @@
#include <linux/rmap.h>
#include <linux/mempolicy.h>
#include <linux/key.h>
-#include <linux/page_ext.h>
#include <linux/debug_locks.h>
#include <linux/debugobjects.h>
#include <linux/lockdep.h>
@@ -1561,9 +1560,6 @@ static noinline void __init kernel_init_freeable(void)
padata_init();
page_alloc_init_late();
- /* Initialize page ext after all struct pages are initialized. */
- if (deferred_struct_pages)
- page_ext_init();
do_basic_setup();
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 43f6d3ed24ef..ff70da11e797 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -225,7 +225,7 @@ static unsigned long nr_kernel_pages __initdata;
static unsigned long nr_all_pages __initdata;
static unsigned long dma_reserve __initdata;
-bool deferred_struct_pages __meminitdata;
+static bool deferred_struct_pages __meminitdata;
static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
@@ -2358,6 +2358,10 @@ void __init page_alloc_init_late(void)
for_each_populated_zone(zone)
set_zone_contiguous(zone);
+
+ /* Initialize page ext after all struct pages are initialized. */
+ if (deferred_struct_pages)
+ page_ext_init();
}
#ifndef __HAVE_ARCH_RESERVED_KERNEL_PAGES
--
2.35.1
* [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (8 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late() Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:32 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 12/14] mm: move kmem_cache_init() declaration to mm/slab.h Mike Rapoport
` (4 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
mem_init_print_info() is only called from mm_core_init().
Move it close to the caller and make it static.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
include/linux/mm.h | 1 -
mm/internal.h | 1 +
mm/mm_init.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++
mm/page_alloc.c | 53 ----------------------------------------------
4 files changed, 54 insertions(+), 54 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2fecabb1a328..e249208f8fbe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2925,7 +2925,6 @@ extern unsigned long free_reserved_area(void *start, void *end,
int poison, const char *s);
extern void adjust_managed_page_count(struct page *page, long count);
-extern void mem_init_print_info(void);
extern void reserve_bootmem_region(phys_addr_t start, phys_addr_t end);
diff --git a/mm/internal.h b/mm/internal.h
index 4750e3a7fd0d..02273c5e971f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -201,6 +201,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
/*
* in mm/page_alloc.c
*/
+#define K(x) ((x) << (PAGE_SHIFT-10))
extern char * const zone_names[MAX_NR_ZONES];
diff --git a/mm/mm_init.c b/mm/mm_init.c
index ff70da11e797..8adadf51bbd2 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -24,6 +24,8 @@
#include <linux/page_ext.h>
#include <linux/pti.h>
#include <linux/pgtable.h>
+#include <linux/swap.h>
+#include <linux/cma.h>
#include "internal.h"
#include "shuffle.h"
@@ -2649,6 +2651,57 @@ static void __init report_meminit(void)
pr_info("mem auto-init: clearing system memory may take some time...\n");
}
+static void __init mem_init_print_info(void)
+{
+ unsigned long physpages, codesize, datasize, rosize, bss_size;
+ unsigned long init_code_size, init_data_size;
+
+ physpages = get_num_physpages();
+ codesize = _etext - _stext;
+ datasize = _edata - _sdata;
+ rosize = __end_rodata - __start_rodata;
+ bss_size = __bss_stop - __bss_start;
+ init_data_size = __init_end - __init_begin;
+ init_code_size = _einittext - _sinittext;
+
+ /*
+ * Detect special cases and adjust section sizes accordingly:
+ * 1) .init.* may be embedded into .data sections
+ * 2) .init.text.* may be out of [__init_begin, __init_end],
+ * please refer to arch/tile/kernel/vmlinux.lds.S.
+ * 3) .rodata.* may be embedded into .text or .data sections.
+ */
+#define adj_init_size(start, end, size, pos, adj) \
+ do { \
+ if (&start[0] <= &pos[0] && &pos[0] < &end[0] && size > adj) \
+ size -= adj; \
+ } while (0)
+
+ adj_init_size(__init_begin, __init_end, init_data_size,
+ _sinittext, init_code_size);
+ adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
+ adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
+ adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
+ adj_init_size(_sdata, _edata, datasize, __start_rodata, rosize);
+
+#undef adj_init_size
+
+ pr_info("Memory: %luK/%luK available (%luK kernel code, %luK rwdata, %luK rodata, %luK init, %luK bss, %luK reserved, %luK cma-reserved"
+#ifdef CONFIG_HIGHMEM
+ ", %luK highmem"
+#endif
+ ")\n",
+ K(nr_free_pages()), K(physpages),
+ codesize / SZ_1K, datasize / SZ_1K, rosize / SZ_1K,
+ (init_data_size + init_code_size) / SZ_1K, bss_size / SZ_1K,
+ K(physpages - totalram_pages() - totalcma_pages),
+ K(totalcma_pages)
+#ifdef CONFIG_HIGHMEM
+ , K(totalhigh_pages())
+#endif
+ );
+}
+
/*
* Set up kernel memory allocators
*/
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2f333c26170c..bb0099f7da93 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5239,8 +5239,6 @@ static bool show_mem_node_skip(unsigned int flags, int nid, nodemask_t *nodemask
return !node_isset(nid, *nodemask);
}
-#define K(x) ((x) << (PAGE_SHIFT-10))
-
static void show_migration_types(unsigned char type)
{
static const char types[MIGRATE_TYPES] = {
@@ -6200,57 +6198,6 @@ unsigned long free_reserved_area(void *start, void *end, int poison, const char
return pages;
}
-void __init mem_init_print_info(void)
-{
- unsigned long physpages, codesize, datasize, rosize, bss_size;
- unsigned long init_code_size, init_data_size;
-
- physpages = get_num_physpages();
- codesize = _etext - _stext;
- datasize = _edata - _sdata;
- rosize = __end_rodata - __start_rodata;
- bss_size = __bss_stop - __bss_start;
- init_data_size = __init_end - __init_begin;
- init_code_size = _einittext - _sinittext;
-
- /*
- * Detect special cases and adjust section sizes accordingly:
- * 1) .init.* may be embedded into .data sections
- * 2) .init.text.* may be out of [__init_begin, __init_end],
- * please refer to arch/tile/kernel/vmlinux.lds.S.
- * 3) .rodata.* may be embedded into .text or .data sections.
- */
-#define adj_init_size(start, end, size, pos, adj) \
- do { \
- if (&start[0] <= &pos[0] && &pos[0] < &end[0] && size > adj) \
- size -= adj; \
- } while (0)
-
- adj_init_size(__init_begin, __init_end, init_data_size,
- _sinittext, init_code_size);
- adj_init_size(_stext, _etext, codesize, _sinittext, init_code_size);
- adj_init_size(_sdata, _edata, datasize, __init_begin, init_data_size);
- adj_init_size(_stext, _etext, codesize, __start_rodata, rosize);
- adj_init_size(_sdata, _edata, datasize, __start_rodata, rosize);
-
-#undef adj_init_size
-
- pr_info("Memory: %luK/%luK available (%luK kernel code, %luK rwdata, %luK rodata, %luK init, %luK bss, %luK reserved, %luK cma-reserved"
-#ifdef CONFIG_HIGHMEM
- ", %luK highmem"
-#endif
- ")\n",
- K(nr_free_pages()), K(physpages),
- codesize / SZ_1K, datasize / SZ_1K, rosize / SZ_1K,
- (init_data_size + init_code_size) / SZ_1K, bss_size / SZ_1K,
- K(physpages - totalram_pages() - totalcma_pages),
- K(totalcma_pages)
-#ifdef CONFIG_HIGHMEM
- , K(totalhigh_pages())
-#endif
- );
-}
-
static int page_alloc_cpu_dead(unsigned int cpu)
{
struct zone *zone;
--
2.35.1
* [PATCH v2 12/14] mm: move kmem_cache_init() declaration to mm/slab.h
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (9 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:33 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h Mike Rapoport
` (3 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
kmem_cache_init() is called only from mm_core_init(), so there is no need
to declare it in include/linux/slab.h.
Move the kmem_cache_init() declaration to mm/slab.h.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
include/linux/slab.h | 1 -
mm/mm_init.c | 1 +
mm/slab.h | 1 +
3 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index aa4575ef2965..f8b1d63c63a3 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -167,7 +167,6 @@ struct mem_cgroup;
/*
* struct kmem_cache related prototypes
*/
-void __init kmem_cache_init(void);
bool slab_is_available(void);
struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 8adadf51bbd2..53fb8e9d1e3b 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -27,6 +27,7 @@
#include <linux/swap.h>
#include <linux/cma.h>
#include "internal.h"
+#include "slab.h"
#include "shuffle.h"
#include <asm/setup.h>
diff --git a/mm/slab.h b/mm/slab.h
index 43966aa5fadf..3f8df2244f5a 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -4,6 +4,7 @@
/*
* Internal slab definitions
*/
+void __init kmem_cache_init(void);
/* Reuses the bits in struct page */
struct slab {
--
2.35.1
* [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (10 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 12/14] mm: move kmem_cache_init() declaration to mm/slab.h Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:33 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization Mike Rapoport
` (2 subsequent siblings)
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
vmalloc_init() is called only from mm_core_init(), so there is no need to
declare it in include/linux/vmalloc.h.
Move the vmalloc_init() declaration to mm/internal.h.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
include/linux/vmalloc.h | 4 ----
mm/internal.h | 5 +++++
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..351fc7697214 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -131,12 +131,8 @@ extern void *vm_map_ram(struct page **pages, unsigned int count, int node);
extern void vm_unmap_aliases(void);
#ifdef CONFIG_MMU
-extern void __init vmalloc_init(void);
extern unsigned long vmalloc_nr_pages(void);
#else
-static inline void vmalloc_init(void)
-{
-}
static inline unsigned long vmalloc_nr_pages(void) { return 0; }
#endif
diff --git a/mm/internal.h b/mm/internal.h
index 02273c5e971f..c05ad651b515 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -900,9 +900,14 @@ size_t splice_folio_into_pipe(struct pipe_inode_info *pipe,
* mm/vmalloc.c
*/
#ifdef CONFIG_MMU
+void __init vmalloc_init(void);
int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
pgprot_t prot, struct page **pages, unsigned int page_shift);
#else
+static inline void vmalloc_init(void)
+{
+}
+
static inline
int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
pgprot_t prot, struct page **pages, unsigned int page_shift)
--
2.35.1
* [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (11 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h Mike Rapoport
@ 2023-03-21 17:05 ` Mike Rapoport
2023-03-22 16:34 ` Vlastimil Babka
2023-03-22 11:19 ` [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c David Hildenbrand
[not found] ` <20230321170513.2401534-4-rppt@kernel.org>
14 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-21 17:05 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Mike Rapoport, Thomas Bogendoerfer,
Vlastimil Babka, linux-kernel, linux-mips, linux-mm
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
and add mm/mm_init.c to the memblock entry in MAINTAINERS.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
MAINTAINERS | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 7002a5d3eb62..b79463ea1049 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13368,13 +13368,14 @@ F: arch/powerpc/include/asm/membarrier.h
F: include/uapi/linux/membarrier.h
F: kernel/sched/membarrier.c
-MEMBLOCK
+MEMBLOCK AND MEMORY MANAGEMENT INITIALIZATION
M: Mike Rapoport <rppt@kernel.org>
L: linux-mm@kvack.org
S: Maintained
F: Documentation/core-api/boot-time-mm.rst
F: include/linux/memblock.h
F: mm/memblock.c
+F: mm/mm_init.c
F: tools/testing/memblock/
MEMORY CONTROLLER DRIVERS
--
2.35.1
* Re: [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
2023-03-21 17:05 ` [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init() Mike Rapoport
@ 2023-03-22 9:06 ` Sergei Shtylyov
2023-03-22 10:08 ` Mike Rapoport
0 siblings, 1 reply; 35+ messages in thread
From: Sergei Shtylyov @ 2023-03-22 9:06 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, Vlastimil Babka, linux-kernel,
linux-mips, linux-mm
On 3/21/23 8:05 PM, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> and drop pgtable_init() as it has no real value and it's name is
Its name.
> misleading.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
[...]
MBR, Sergey
* Re: [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
2023-03-22 9:06 ` Sergei Shtylyov
@ 2023-03-22 10:08 ` Mike Rapoport
2023-03-22 11:18 ` David Hildenbrand
2023-03-22 16:27 ` Vlastimil Babka
0 siblings, 2 replies; 35+ messages in thread
From: Mike Rapoport @ 2023-03-22 10:08 UTC (permalink / raw)
To: Sergei Shtylyov
Cc: Andrew Morton, David Hildenbrand, Doug Berger, Matthew Wilcox,
Mel Gorman, Michal Hocko, Thomas Bogendoerfer, Vlastimil Babka,
linux-kernel, linux-mips, linux-mm
On Wed, Mar 22, 2023 at 12:06:18PM +0300, Sergei Shtylyov wrote:
> On 3/21/23 8:05 PM, Mike Rapoport wrote:
>
> > From: "Mike Rapoport (IBM)" <rppt@kernel.org>
> >
> > and drop pgtable_init() as it has no real value and it's name is
>
> Its name.
oops :)
Andrew, can you replace this patch with the updated version, please?
From 52420723c9bfa84aa48f666330e96f9e5b2f3248 Mon Sep 17 00:00:00 2001
From: "Mike Rapoport (IBM)" <rppt@kernel.org>
Date: Sat, 18 Mar 2023 13:55:28 +0200
Subject: [PATCH v3] mm: call {ptlock,pgtable}_cache_init() directly from
mm_core_init()
and drop pgtable_init() as it has no real value and its name is
misleading.
Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
include/linux/mm.h | 6 ------
mm/mm_init.c | 3 ++-
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2d7f095136fc..c3c67d8bc833 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2782,12 +2782,6 @@ static inline bool ptlock_init(struct page *page) { return true; }
static inline void ptlock_free(struct page *page) {}
#endif /* USE_SPLIT_PTE_PTLOCKS */
-static inline void pgtable_init(void)
-{
- ptlock_cache_init();
- pgtable_cache_init();
-}
-
static inline bool pgtable_pte_page_ctor(struct page *page)
{
if (!ptlock_init(page))
diff --git a/mm/mm_init.c b/mm/mm_init.c
index bba73f1fb277..f1475413394d 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2584,7 +2584,8 @@ void __init mm_core_init(void)
*/
page_ext_init_flatmem_late();
kmemleak_init();
- pgtable_init();
+ ptlock_cache_init();
+ pgtable_cache_init();
debug_objects_mem_init();
vmalloc_init();
/* If no deferred init page_ext now, as vmap is fully initialized */
--
2.35.1
--
Sincerely yours,
Mike.
* Re: [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
2023-03-22 10:08 ` Mike Rapoport
@ 2023-03-22 11:18 ` David Hildenbrand
2023-03-22 16:27 ` Vlastimil Babka
1 sibling, 0 replies; 35+ messages in thread
From: David Hildenbrand @ 2023-03-22 11:18 UTC (permalink / raw)
To: Mike Rapoport, Sergei Shtylyov
Cc: Andrew Morton, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, Vlastimil Babka, linux-kernel,
linux-mips, linux-mm
On 22.03.23 11:08, Mike Rapoport wrote:
> On Wed, Mar 22, 2023 at 12:06:18PM +0300, Sergei Shtylyov wrote:
>> On 3/21/23 8:05 PM, Mike Rapoport wrote:
>>
>>> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>>>
>>> and drop pgtable_init() as it has no real value and it's name is
>>
>> Its name.
>
> oops :)
>
> Andrew, can you replace this patch with the updated version, please?
>
> From 52420723c9bfa84aa48f666330e96f9e5b2f3248 Mon Sep 17 00:00:00 2001
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
> Date: Sat, 18 Mar 2023 13:55:28 +0200
> Subject: [PATCH v3] mm: call {ptlock,pgtable}_cache_init() directly from
> mm_core_init()
>
> and drop pgtable_init() as it has no real value and its name is
> misleading.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> ---
> include/linux/mm.h | 6 ------
> mm/mm_init.c | 3 ++-
> 2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2d7f095136fc..c3c67d8bc833 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2782,12 +2782,6 @@ static inline bool ptlock_init(struct page *page) { return true; }
> static inline void ptlock_free(struct page *page) {}
> #endif /* USE_SPLIT_PTE_PTLOCKS */
>
> -static inline void pgtable_init(void)
> -{
> - ptlock_cache_init();
> - pgtable_cache_init();
> -}
> -
> static inline bool pgtable_pte_page_ctor(struct page *page)
> {
> if (!ptlock_init(page))
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index bba73f1fb277..f1475413394d 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2584,7 +2584,8 @@ void __init mm_core_init(void)
> */
> page_ext_init_flatmem_late();
> kmemleak_init();
> - pgtable_init();
> + ptlock_cache_init();
> + pgtable_cache_init();
> debug_objects_mem_init();
> vmalloc_init();
> /* If no deferred init page_ext now, as vmap is fully initialized */
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb
* Re: [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
` (12 preceding siblings ...)
2023-03-21 17:05 ` [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization Mike Rapoport
@ 2023-03-22 11:19 ` David Hildenbrand
[not found] ` <20230321170513.2401534-4-rppt@kernel.org>
14 siblings, 0 replies; 35+ messages in thread
From: David Hildenbrand @ 2023-03-22 11:19 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: Doug Berger, Matthew Wilcox, Mel Gorman, Michal Hocko,
Thomas Bogendoerfer, Vlastimil Babka, linux-kernel, linux-mips,
linux-mm
On 21.03.23 18:04, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> Also in git:
> https://git.kernel.org/rppt/h/mm-init/v2
>
> v2:
> * move init_cma_reserved_pageblock() from cma.c to mm_init.c
> * rename init_mem_debugging_and_hardening() to
> mem_debugging_and_hardening_init()
> * inline pgtable_init() into mem_core_init()
> * add Acked and Reviewed tags (thanks David, hopefully I've picked them
> right)
Sorry, I get lazy on large patches and only ACK instead of checking each
and every line :)
--
Thanks,
David / dhildenb
* Re: [PATCH v2 01/14] mips: fix comment about pgtable_init()
2023-03-21 17:05 ` [PATCH v2 01/14] mips: fix comment about pgtable_init() Mike Rapoport
@ 2023-03-22 11:36 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 11:36 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> The comment about fixrange_init() says that it is called from pgtable_init()
> while the actual caller is pagetable_init().
>
> Update comment to match the code.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> arch/mips/include/asm/fixmap.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/mips/include/asm/fixmap.h b/arch/mips/include/asm/fixmap.h
> index beea14761cef..b037718d7e8b 100644
> --- a/arch/mips/include/asm/fixmap.h
> +++ b/arch/mips/include/asm/fixmap.h
> @@ -70,7 +70,7 @@ enum fixed_addresses {
> #include <asm-generic/fixmap.h>
>
> /*
> - * Called from pgtable_init()
> + * Called from pagetable_init()
> */
> extern void fixrange_init(unsigned long start, unsigned long end,
> pgd_t *pgd_base);
* Re: [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled
2023-03-21 17:05 ` [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled Mike Rapoport
@ 2023-03-22 11:38 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 11:38 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> Instead of duplicating the long static_branch_unlikely(&check_pages_enabled)
> test, wrap it in a helper function, is_check_pages_enabled().
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/page_alloc.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 87d760236dba..e1149d54d738 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -245,6 +245,11 @@ EXPORT_SYMBOL(init_on_free);
> /* perform sanity checks on struct pages being allocated or freed */
> static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
>
> +static inline bool is_check_pages_enabled(void)
> +{
> + return static_branch_unlikely(&check_pages_enabled);
> +}
> +
> static bool _init_on_alloc_enabled_early __read_mostly
> = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
> static int __init early_init_on_alloc(char *buf)
> @@ -1443,7 +1448,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> for (i = 1; i < (1 << order); i++) {
> if (compound)
> bad += free_tail_pages_check(page, page + i);
> - if (static_branch_unlikely(&check_pages_enabled)) {
> + if (is_check_pages_enabled()) {
> if (unlikely(free_page_is_bad(page + i))) {
> bad++;
> continue;
> @@ -1456,7 +1461,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> page->mapping = NULL;
> if (memcg_kmem_online() && PageMemcgKmem(page))
> __memcg_kmem_uncharge_page(page, order);
> - if (static_branch_unlikely(&check_pages_enabled)) {
> + if (is_check_pages_enabled()) {
> if (free_page_is_bad(page))
> bad++;
> if (bad)
> @@ -2366,7 +2371,7 @@ static int check_new_page(struct page *page)
>
> static inline bool check_new_pages(struct page *page, unsigned int order)
> {
> - if (static_branch_unlikely(&check_pages_enabled)) {
> + if (is_check_pages_enabled()) {
> for (int i = 0; i < (1 << order); i++) {
> struct page *p = page + i;
>
* Re: [PATCH v2 03/14] mm: move most of core MM initialization to mm/mm_init.c
[not found] ` <20230321170513.2401534-4-rppt@kernel.org>
@ 2023-03-22 14:26 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 14:26 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> The bulk of memory management initialization code is spread all over
> mm/page_alloc.c and makes navigating through page allocator
> functionality difficult.
>
> Move most of the functions marked __init and __meminit to mm/mm_init.c
> to make it better localized and allow some more spare room before
> mm/page_alloc.c reaches 10k lines.
>
> No functional changes.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c
2023-03-21 17:05 ` [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c Mike Rapoport
@ 2023-03-22 14:49 ` Vlastimil Babka
2023-03-22 15:00 ` Mike Rapoport
0 siblings, 1 reply; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 14:49 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> The hashdist variable must be initialized before the first call to
> alloc_large_system_hash(), and free_area_init() looks like a better place
> for it than page_alloc_init().
>
> Move hashdist handling to mm/mm_init.c
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Looks like this will move the fixup_hashdist() call earlier, but it can't
result in seeing fewer N_MEMORY nodes than before, right?
I wonder if the whole thing lacks hotplug support anyway; what if the system
boots with one node and more are added later? Hmm.
> ---
> mm/mm_init.c | 22 ++++++++++++++++++++++
> mm/page_alloc.c | 18 ------------------
> 2 files changed, 22 insertions(+), 18 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 68d0187c7886..2e60c7186132 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -607,6 +607,25 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
>
> return nid;
> }
> +
> +int hashdist = HASHDIST_DEFAULT;
> +
> +static int __init set_hashdist(char *str)
> +{
> + if (!str)
> + return 0;
> + hashdist = simple_strtoul(str, &str, 0);
> + return 1;
> +}
> +__setup("hashdist=", set_hashdist);
> +
> +static inline void fixup_hashdist(void)
> +{
> + if (num_node_state(N_MEMORY) == 1)
> + hashdist = 0;
> +}
> +#else
> +static inline void fixup_hashdist(void) {}
> #endif /* CONFIG_NUMA */
>
> #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> @@ -1855,6 +1874,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
> }
>
> memmap_init();
> +
> + /* disable hash distribution for systems with a single node */
> + fixup_hashdist();
> }
>
> /**
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c56c147bdf27..ff6a2fff2880 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6383,28 +6383,10 @@ static int page_alloc_cpu_online(unsigned int cpu)
> return 0;
> }
>
> -#ifdef CONFIG_NUMA
> -int hashdist = HASHDIST_DEFAULT;
> -
> -static int __init set_hashdist(char *str)
> -{
> - if (!str)
> - return 0;
> - hashdist = simple_strtoul(str, &str, 0);
> - return 1;
> -}
> -__setup("hashdist=", set_hashdist);
> -#endif
> -
> void __init page_alloc_init(void)
> {
> int ret;
>
> -#ifdef CONFIG_NUMA
> - if (num_node_state(N_MEMORY) == 1)
> - hashdist = 0;
> -#endif
> -
> ret = cpuhp_setup_state_nocalls(CPUHP_PAGE_ALLOC,
> "mm/page_alloc:pcp",
> page_alloc_cpu_online,
* Re: [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp()
2023-03-21 17:05 ` [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp() Mike Rapoport
@ 2023-03-22 14:50 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 14:50 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> The page_alloc_init() name is really misleading because all this
> function does is set up CPU hotplug callbacks for the page allocator.
>
> Rename it to page_alloc_init_cpuhp() so that the name reflects what the
> function does.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> include/linux/gfp.h | 2 +-
> init/main.c | 2 +-
> mm/page_alloc.c | 2 +-
> 3 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 7c554e4bd49f..ed8cb537c6a7 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -319,7 +319,7 @@ extern void page_frag_free(void *addr);
> #define __free_page(page) __free_pages((page), 0)
> #define free_page(addr) free_pages((addr), 0)
>
> -void page_alloc_init(void);
> +void page_alloc_init_cpuhp(void);
> void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
> void drain_all_pages(struct zone *zone);
> void drain_local_pages(struct zone *zone);
> diff --git a/init/main.c b/init/main.c
> index 4425d1783d5c..b2499bee7a3c 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -969,7 +969,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
> boot_cpu_hotplug_init();
>
> build_all_zonelists(NULL);
> - page_alloc_init();
> + page_alloc_init_cpuhp();
>
> pr_notice("Kernel command line: %s\n", saved_command_line);
> /* parameters may set static keys */
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ff6a2fff2880..d1276bfe7a30 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6383,7 +6383,7 @@ static int page_alloc_cpu_online(unsigned int cpu)
> return 0;
> }
>
> -void __init page_alloc_init(void)
> +void __init page_alloc_init_cpuhp(void)
> {
> int ret;
>
* Re: [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c
2023-03-22 14:49 ` Vlastimil Babka
@ 2023-03-22 15:00 ` Mike Rapoport
0 siblings, 0 replies; 35+ messages in thread
From: Mike Rapoport @ 2023-03-22 15:00 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, David Hildenbrand, Doug Berger, Matthew Wilcox,
Mel Gorman, Michal Hocko, Thomas Bogendoerfer, linux-kernel,
linux-mips, linux-mm
On Wed, Mar 22, 2023 at 03:49:24PM +0100, Vlastimil Babka wrote:
> On 3/21/23 18:05, Mike Rapoport wrote:
> > From: "Mike Rapoport (IBM)" <rppt@kernel.org>
> >
> > The hashdist variable must be initialized before the first call to
> > alloc_large_system_hash(), and free_area_init() looks like a better place
> > for it than page_alloc_init().
> >
> > Move hashdist handling to mm/mm_init.c
> >
> > Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> > Acked-by: David Hildenbrand <david@redhat.com>
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>
> Looks like this will move the fixup_hashdist() call earlier, but can't
> result in seeing less N_MEMORY nodes than before, right?
hashdist must be set before the first call to alloc_large_system_hash() and
after the nodes present at boot time are initialized, so setting it at the
end of free_area_init() is OK.
> I wonder if the whole thing lacks hotplug support anyway, what if system
> boots with one node and more are added later? Hmm.
alloc_large_system_hash() is called really early even for !HASH_EARLY
cases. Not sure it's feasible to redistribute the hashes allocated with it
when a new node is added.
> > ---
> > mm/mm_init.c | 22 ++++++++++++++++++++++
> > mm/page_alloc.c | 18 ------------------
> > 2 files changed, 22 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/mm_init.c b/mm/mm_init.c
> > index 68d0187c7886..2e60c7186132 100644
> > --- a/mm/mm_init.c
> > +++ b/mm/mm_init.c
> > @@ -607,6 +607,25 @@ int __meminit early_pfn_to_nid(unsigned long pfn)
> >
> > return nid;
> > }
> > +
> > +int hashdist = HASHDIST_DEFAULT;
> > +
> > +static int __init set_hashdist(char *str)
> > +{
> > + if (!str)
> > + return 0;
> > + hashdist = simple_strtoul(str, &str, 0);
> > + return 1;
> > +}
> > +__setup("hashdist=", set_hashdist);
> > +
> > +static inline void fixup_hashdist(void)
> > +{
> > + if (num_node_state(N_MEMORY) == 1)
> > + hashdist = 0;
> > +}
> > +#else
> > +static inline void fixup_hashdist(void) {}
> > #endif /* CONFIG_NUMA */
> >
> > #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> > @@ -1855,6 +1874,9 @@ void __init free_area_init(unsigned long *max_zone_pfn)
> > }
> >
> > memmap_init();
> > +
> > + /* disable hash distribution for systems with a single node */
> > + fixup_hashdist();
> > }
> >
> > /**
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c56c147bdf27..ff6a2fff2880 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -6383,28 +6383,10 @@ static int page_alloc_cpu_online(unsigned int cpu)
> > return 0;
> > }
> >
> > -#ifdef CONFIG_NUMA
> > -int hashdist = HASHDIST_DEFAULT;
> > -
> > -static int __init set_hashdist(char *str)
> > -{
> > - if (!str)
> > - return 0;
> > - hashdist = simple_strtoul(str, &str, 0);
> > - return 1;
> > -}
> > -__setup("hashdist=", set_hashdist);
> > -#endif
> > -
> > void __init page_alloc_init(void)
> > {
> > int ret;
> >
> > -#ifdef CONFIG_NUMA
> > - if (num_node_state(N_MEMORY) == 1)
> > - hashdist = 0;
> > -#endif
> > -
> > ret = cpuhp_setup_state_nocalls(CPUHP_PAGE_ALLOC,
> > "mm/page_alloc:pcp",
> > page_alloc_cpu_online,
>
--
Sincerely yours,
Mike.
* Re: [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init()
2023-03-21 17:05 ` [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init() Mike Rapoport
@ 2023-03-22 16:10 ` Vlastimil Babka
2023-03-22 20:26 ` Mike Rapoport
0 siblings, 1 reply; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:10 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> Both build_all_zonelists() and page_alloc_init_cpuhp() must be called
> after SMP setup is complete but before the page allocator is set up.
>
> Still, they both are a part of memory management initialization, so move
> them to mm_init().
Well, logic grouping is one thing, but not breaking a functional order is
more important. So this moves both calls to happen later than they were. I
guess it could only matter for page_alloc_init_cpuhp() in case cpu hotplugs
would be processed in some of the calls we "skipped" over by moving this
later. And one of them is setup_arch()... so are we sure no arch does some
cpu hotplug for non-boot cpus there?
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---
> init/main.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/init/main.c b/init/main.c
> index b2499bee7a3c..4423906177c1 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -833,6 +833,10 @@ static void __init report_meminit(void)
> */
> static void __init mm_init(void)
> {
> + /* Initializations relying on SMP setup */
> + build_all_zonelists(NULL);
> + page_alloc_init_cpuhp();
> +
> /*
> * page_ext requires contiguous pages,
> * bigger than MAX_ORDER unless SPARSEMEM.
> @@ -968,9 +972,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
> smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
> boot_cpu_hotplug_init();
>
> - build_all_zonelists(NULL);
> - page_alloc_init_cpuhp();
> -
> pr_notice("Kernel command line: %s\n", saved_command_line);
> /* parameters may set static keys */
> jump_label_init();
* Re: [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init()
2023-03-21 17:05 ` [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init() Mike Rapoport
@ 2023-03-22 16:24 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:24 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> Make mm_init() a part of mm/ codebase. mm_core_init() better describes
> what the function does and does not clash with mm_init() in kernel/fork.c
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init()
2023-03-22 10:08 ` Mike Rapoport
2023-03-22 11:18 ` David Hildenbrand
@ 2023-03-22 16:27 ` Vlastimil Babka
1 sibling, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:27 UTC (permalink / raw)
To: Mike Rapoport, Sergei Shtylyov
Cc: Andrew Morton, David Hildenbrand, Doug Berger, Matthew Wilcox,
Mel Gorman, Michal Hocko, Thomas Bogendoerfer, linux-kernel,
linux-mips, linux-mm
On 3/22/23 11:08, Mike Rapoport wrote:
> On Wed, Mar 22, 2023 at 12:06:18PM +0300, Sergei Shtylyov wrote:
>> On 3/21/23 8:05 PM, Mike Rapoport wrote:
>>
>> > From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>> >
>> > and drop pgtable_init() as it has no real value and it's name is
>>
>> Its name.
>
> oops :)
>
> Andrew, can you replace this patch with the updated version, please?
>
> From 52420723c9bfa84aa48f666330e96f9e5b2f3248 Mon Sep 17 00:00:00 2001
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
> Date: Sat, 18 Mar 2023 13:55:28 +0200
> Subject: [PATCH v3] mm: call {ptlock,pgtable}_cache_init() directly from
> mm_core_init()
>
> and drop pgtable_init() as it has no real value and its name is
> misleading.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> include/linux/mm.h | 6 ------
> mm/mm_init.c | 3 ++-
> 2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 2d7f095136fc..c3c67d8bc833 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2782,12 +2782,6 @@ static inline bool ptlock_init(struct page *page) { return true; }
> static inline void ptlock_free(struct page *page) {}
> #endif /* USE_SPLIT_PTE_PTLOCKS */
>
> -static inline void pgtable_init(void)
> -{
> - ptlock_cache_init();
> - pgtable_cache_init();
> -}
> -
> static inline bool pgtable_pte_page_ctor(struct page *page)
> {
> if (!ptlock_init(page))
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index bba73f1fb277..f1475413394d 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2584,7 +2584,8 @@ void __init mm_core_init(void)
> */
> page_ext_init_flatmem_late();
> kmemleak_init();
> - pgtable_init();
> + ptlock_cache_init();
> + pgtable_cache_init();
> debug_objects_mem_init();
> vmalloc_init();
> /* If no deferred init page_ext now, as vmap is fully initialized */
* Re: [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c
2023-03-21 17:05 ` [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c Mike Rapoport
@ 2023-03-22 16:28 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:28 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> init_mem_debugging_and_hardening() is only called from mm_core_init().
>
> Move it close to the caller, make it static and rename it to
> mem_debugging_and_hardening_init() for consistency with surrounding
> convention.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> include/linux/mm.h | 1 -
> mm/internal.h | 8 ++++
> mm/mm_init.c | 91 +++++++++++++++++++++++++++++++++++++++++++-
> mm/page_alloc.c | 95 ----------------------------------------------
> 4 files changed, 98 insertions(+), 97 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c3c67d8bc833..2fecabb1a328 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3394,7 +3394,6 @@ extern int apply_to_existing_page_range(struct mm_struct *mm,
> unsigned long address, unsigned long size,
> pte_fn_t fn, void *data);
>
> -extern void __init init_mem_debugging_and_hardening(void);
> #ifdef CONFIG_PAGE_POISONING
> extern void __kernel_poison_pages(struct page *page, int numpages);
> extern void __kernel_unpoison_pages(struct page *page, int numpages);
> diff --git a/mm/internal.h b/mm/internal.h
> index 2a925de49393..4750e3a7fd0d 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -204,6 +204,14 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
>
> extern char * const zone_names[MAX_NR_ZONES];
>
> +/* perform sanity checks on struct pages being allocated or freed */
> +DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
> +
> +static inline bool is_check_pages_enabled(void)
> +{
> + return static_branch_unlikely(&check_pages_enabled);
> +}
> +
> /*
> * Structure for holding the mostly immutable allocation parameters passed
> * between functions involved in allocations, including the alloc_pages*
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index f1475413394d..43f6d3ed24ef 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -2531,6 +2531,95 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
> __free_pages_core(page, order);
> }
>
> +static bool _init_on_alloc_enabled_early __read_mostly
> + = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
> +static int __init early_init_on_alloc(char *buf)
> +{
> +
> + return kstrtobool(buf, &_init_on_alloc_enabled_early);
> +}
> +early_param("init_on_alloc", early_init_on_alloc);
> +
> +static bool _init_on_free_enabled_early __read_mostly
> + = IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
> +static int __init early_init_on_free(char *buf)
> +{
> + return kstrtobool(buf, &_init_on_free_enabled_early);
> +}
> +early_param("init_on_free", early_init_on_free);
> +
> +DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
> +
> +/*
> + * Enable static keys related to various memory debugging and hardening options.
> + * Some override others, and depend on early params that are evaluated in the
> + * order of appearance. So we need to first gather the full picture of what was
> + * enabled, and then make decisions.
> + */
> +static void __init mem_debugging_and_hardening_init(void)
> +{
> + bool page_poisoning_requested = false;
> + bool want_check_pages = false;
> +
> +#ifdef CONFIG_PAGE_POISONING
> + /*
> + * Page poisoning is debug page alloc for some arches. If
> + * either of those options are enabled, enable poisoning.
> + */
> + if (page_poisoning_enabled() ||
> + (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> + debug_pagealloc_enabled())) {
> + static_branch_enable(&_page_poisoning_enabled);
> + page_poisoning_requested = true;
> + want_check_pages = true;
> + }
> +#endif
> +
> + if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
> + page_poisoning_requested) {
> + pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
> + "will take precedence over init_on_alloc and init_on_free\n");
> + _init_on_alloc_enabled_early = false;
> + _init_on_free_enabled_early = false;
> + }
> +
> + if (_init_on_alloc_enabled_early) {
> + want_check_pages = true;
> + static_branch_enable(&init_on_alloc);
> + } else {
> + static_branch_disable(&init_on_alloc);
> + }
> +
> + if (_init_on_free_enabled_early) {
> + want_check_pages = true;
> + static_branch_enable(&init_on_free);
> + } else {
> + static_branch_disable(&init_on_free);
> + }
> +
> + if (IS_ENABLED(CONFIG_KMSAN) &&
> + (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
> + pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
> +
> +#ifdef CONFIG_DEBUG_PAGEALLOC
> + if (debug_pagealloc_enabled()) {
> + want_check_pages = true;
> + static_branch_enable(&_debug_pagealloc_enabled);
> +
> + if (debug_guardpage_minorder())
> + static_branch_enable(&_debug_guardpage_enabled);
> + }
> +#endif
> +
> + /*
> + * Any page debugging or hardening option also enables sanity checking
> + * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
> + * enabled already.
> + */
> + if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
> + static_branch_enable(&check_pages_enabled);
> +}
> +
> /* Report memory auto-initialization states for this boot. */
> static void __init report_meminit(void)
> {
> @@ -2570,7 +2659,7 @@ void __init mm_core_init(void)
> * bigger than MAX_ORDER unless SPARSEMEM.
> */
> page_ext_init_flatmem();
> - init_mem_debugging_and_hardening();
> + mem_debugging_and_hardening_init();
> kfence_alloc_pool();
> report_meminit();
> kmsan_init_shadow();
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d1276bfe7a30..2f333c26170c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -240,31 +240,6 @@ EXPORT_SYMBOL(init_on_alloc);
> DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_FREE_DEFAULT_ON, init_on_free);
> EXPORT_SYMBOL(init_on_free);
>
> -/* perform sanity checks on struct pages being allocated or freed */
> -static DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
> -
> -static inline bool is_check_pages_enabled(void)
> -{
> - return static_branch_unlikely(&check_pages_enabled);
> -}
> -
> -static bool _init_on_alloc_enabled_early __read_mostly
> - = IS_ENABLED(CONFIG_INIT_ON_ALLOC_DEFAULT_ON);
> -static int __init early_init_on_alloc(char *buf)
> -{
> -
> - return kstrtobool(buf, &_init_on_alloc_enabled_early);
> -}
> -early_param("init_on_alloc", early_init_on_alloc);
> -
> -static bool _init_on_free_enabled_early __read_mostly
> - = IS_ENABLED(CONFIG_INIT_ON_FREE_DEFAULT_ON);
> -static int __init early_init_on_free(char *buf)
> -{
> - return kstrtobool(buf, &_init_on_free_enabled_early);
> -}
> -early_param("init_on_free", early_init_on_free);
> -
> /*
> * A cached value of the page's pageblock's migratetype, used when the page is
> * put on a pcplist. Used to avoid the pageblock migratetype lookup when
> @@ -798,76 +773,6 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
> unsigned int order, int migratetype) {}
> #endif
>
> -/*
> - * Enable static keys related to various memory debugging and hardening options.
> - * Some override others, and depend on early params that are evaluated in the
> - * order of appearance. So we need to first gather the full picture of what was
> - * enabled, and then make decisions.
> - */
> -void __init init_mem_debugging_and_hardening(void)
> -{
> - bool page_poisoning_requested = false;
> - bool want_check_pages = false;
> -
> -#ifdef CONFIG_PAGE_POISONING
> - /*
> - * Page poisoning is debug page alloc for some arches. If
> - * either of those options are enabled, enable poisoning.
> - */
> - if (page_poisoning_enabled() ||
> - (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
> - debug_pagealloc_enabled())) {
> - static_branch_enable(&_page_poisoning_enabled);
> - page_poisoning_requested = true;
> - want_check_pages = true;
> - }
> -#endif
> -
> - if ((_init_on_alloc_enabled_early || _init_on_free_enabled_early) &&
> - page_poisoning_requested) {
> - pr_info("mem auto-init: CONFIG_PAGE_POISONING is on, "
> - "will take precedence over init_on_alloc and init_on_free\n");
> - _init_on_alloc_enabled_early = false;
> - _init_on_free_enabled_early = false;
> - }
> -
> - if (_init_on_alloc_enabled_early) {
> - want_check_pages = true;
> - static_branch_enable(&init_on_alloc);
> - } else {
> - static_branch_disable(&init_on_alloc);
> - }
> -
> - if (_init_on_free_enabled_early) {
> - want_check_pages = true;
> - static_branch_enable(&init_on_free);
> - } else {
> - static_branch_disable(&init_on_free);
> - }
> -
> - if (IS_ENABLED(CONFIG_KMSAN) &&
> - (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
> - pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
> -
> -#ifdef CONFIG_DEBUG_PAGEALLOC
> - if (debug_pagealloc_enabled()) {
> - want_check_pages = true;
> - static_branch_enable(&_debug_pagealloc_enabled);
> -
> - if (debug_guardpage_minorder())
> - static_branch_enable(&_debug_guardpage_enabled);
> - }
> -#endif
> -
> - /*
> - * Any page debugging or hardening option also enables sanity checking
> - * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
> - * enabled already.
> - */
> - if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
> - static_branch_enable(&check_pages_enabled);
> -}
> -
> static inline void set_buddy_order(struct page *page, unsigned int order)
> {
> set_page_private(page, order);
* Re: [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late()
2023-03-21 17:05 ` [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late() Mike Rapoport
@ 2023-03-22 16:30 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:30 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> When deferred initialization of struct pages is enabled, page_ext_init()
> must be called after all the deferred initialization is done, but there
> is no point in keeping it a separate call from kernel_init_freeable() right
> after page_alloc_init_late().
>
> Fold the call to page_ext_init() into page_alloc_init_late() and
> localize deferred_struct_pages variable.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c
2023-03-21 17:05 ` [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c Mike Rapoport
@ 2023-03-22 16:32 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:32 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> mem_init_print_info() is only called from mm_core_init().
>
> Move it close to the caller and make it static.
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2 12/14] mm: move kmem_cache_init() declaration to mm/slab.h
2023-03-21 17:05 ` [PATCH v2 12/14] mm: move kmem_cache_init() declaration to mm/slab.h Mike Rapoport
@ 2023-03-22 16:33 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:33 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> kmem_cache_init() is called only from mm_core_init(); there is no need
> to declare it in include/linux/slab.h
>
> Move kmem_cache_init() declaration to mm/slab.h
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h
2023-03-21 17:05 ` [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h Mike Rapoport
@ 2023-03-22 16:33 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:33 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> vmalloc_init() is called only from mm_core_init(); there is no need to
> declare it in include/linux/vmalloc.h
>
> Move vmalloc_init() declaration to mm/internal.h
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization
2023-03-21 17:05 ` [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization Mike Rapoport
@ 2023-03-22 16:34 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-22 16:34 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: David Hildenbrand, Doug Berger, Matthew Wilcox, Mel Gorman,
Michal Hocko, Thomas Bogendoerfer, linux-kernel, linux-mips,
linux-mm
On 3/21/23 18:05, Mike Rapoport wrote:
> From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>
> and add mm/mm_init.c to memblock entry in MAINTAINERS
>
> Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
> MAINTAINERS | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7002a5d3eb62..b79463ea1049 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -13368,13 +13368,14 @@ F: arch/powerpc/include/asm/membarrier.h
> F: include/uapi/linux/membarrier.h
> F: kernel/sched/membarrier.c
>
> -MEMBLOCK
> +MEMBLOCK AND MEMORY MANAGEMENT INITIALIZATION
> M: Mike Rapoport <rppt@kernel.org>
> L: linux-mm@kvack.org
> S: Maintained
> F: Documentation/core-api/boot-time-mm.rst
> F: include/linux/memblock.h
> F: mm/memblock.c
> +F: mm/mm_init.c
> F: tools/testing/memblock/
>
> MEMORY CONTROLLER DRIVERS
* Re: [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init()
2023-03-22 16:10 ` Vlastimil Babka
@ 2023-03-22 20:26 ` Mike Rapoport
2023-03-23 7:09 ` Vlastimil Babka
0 siblings, 1 reply; 35+ messages in thread
From: Mike Rapoport @ 2023-03-22 20:26 UTC (permalink / raw)
To: Vlastimil Babka
Cc: Andrew Morton, David Hildenbrand, Doug Berger, Matthew Wilcox,
Mel Gorman, Michal Hocko, Thomas Bogendoerfer, linux-kernel,
linux-mips, linux-mm
On Wed, Mar 22, 2023 at 05:10:10PM +0100, Vlastimil Babka wrote:
> On 3/21/23 18:05, Mike Rapoport wrote:
> > From: "Mike Rapoport (IBM)" <rppt@kernel.org>
> >
> > Both build_all_zonelists() and page_alloc_init_cpuhp() must be called
> > after SMP setup is complete but before the page allocator is set up.
> >
> > Still, they both are a part of memory management initialization, so move
> > them to mm_init().
>
> Well, logic grouping is one thing, but not breaking a functional order is
> more important. So this moves both calls to happen later than they were. I
> guess it could only matter for page_alloc_init_cpuhp() in case cpu hotplugs
> would be processed in some of the calls we "skipped" over by moving this
> later. And one of them is setup_arch()... so are we sure no arch does some
> cpu hotplug for non-boot cpus there?
mm_init() happens after the point build_all_zonelists() and
page_alloc_init_cpuhp() were originally, so they are essentially moved
later in the init sequence and in either case called after setup_arch().
We skip the code below and it does neither cpu hotplug nor
non-memblock allocations.
jump_label_init();
parse_early_param();
after_dashes = parse_args("Booting kernel",
static_command_line, __start___param,
__stop___param - __start___param,
-1, -1, NULL, &unknown_bootoption);
print_unknown_bootoptions();
if (!IS_ERR_OR_NULL(after_dashes))
parse_args("Setting init args", after_dashes, NULL, 0, -1, -1,
NULL, set_init_arg);
if (extra_init_args)
parse_args("Setting extra init args", extra_init_args,
NULL, 0, -1, -1, NULL, set_init_arg);
/* Architectural and non-timekeeping rng init, before allocator init */
random_init_early(command_line);
/*
* These use large bootmem allocations and must precede
* kmem_cache_init()
*/
setup_log_buf(0);
vfs_caches_init_early();
sort_main_extable();
trap_init();
> > Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
> > Acked-by: David Hildenbrand <david@redhat.com>
> > ---
> > init/main.c | 7 ++++---
> > 1 file changed, 4 insertions(+), 3 deletions(-)
> >
> > diff --git a/init/main.c b/init/main.c
> > index b2499bee7a3c..4423906177c1 100644
> > --- a/init/main.c
> > +++ b/init/main.c
> > @@ -833,6 +833,10 @@ static void __init report_meminit(void)
> > */
> > static void __init mm_init(void)
> > {
> > + /* Initializations relying on SMP setup */
> > + build_all_zonelists(NULL);
> > + page_alloc_init_cpuhp();
> > +
> > /*
> > * page_ext requires contiguous pages,
> > * bigger than MAX_ORDER unless SPARSEMEM.
> > @@ -968,9 +972,6 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
> > smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
> > boot_cpu_hotplug_init();
> >
> > - build_all_zonelists(NULL);
> > - page_alloc_init_cpuhp();
> > -
> > pr_notice("Kernel command line: %s\n", saved_command_line);
> > /* parameters may set static keys */
> > jump_label_init();
>
--
Sincerely yours,
Mike.
* Re: [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init()
2023-03-22 20:26 ` Mike Rapoport
@ 2023-03-23 7:09 ` Vlastimil Babka
0 siblings, 0 replies; 35+ messages in thread
From: Vlastimil Babka @ 2023-03-23 7:09 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, David Hildenbrand, Doug Berger, Matthew Wilcox,
Mel Gorman, Michal Hocko, Thomas Bogendoerfer, linux-kernel,
linux-mips, linux-mm
On 3/22/23 21:26, Mike Rapoport wrote:
> On Wed, Mar 22, 2023 at 05:10:10PM +0100, Vlastimil Babka wrote:
>> On 3/21/23 18:05, Mike Rapoport wrote:
>> > From: "Mike Rapoport (IBM)" <rppt@kernel.org>
>> >
>> > Both build_all_zonelists() and page_alloc_init_cpuhp() must be called
>> > after SMP setup is complete but before the page allocator is set up.
>> >
>> > Still, they both are a part of memory management initialization, so move
>> > them to mm_init().
>>
>> Well, logic grouping is one thing, but not breaking a functional order is
>> more important. So this moves both calls to happen later than theyw ere. I
>> guess it could only matter for page_alloc_init_cpuhp() in case cpu hotplugs
>> would be processed in some of the calls we "skipped" over by moving this
>> later. And one of them is setup_arch()... so are we sure no arch does some
>> cpu hotplug for non-boot cpus there?
>
> mm_init() happens after the point where build_all_zonelists() and
> page_alloc_init_cpuhp() were originally called, so they are essentially moved
> later in the init sequence and in either case called after setup_arch().
Right, I looked at a wrong place in start_kernel() for the original location
of the calls, sorry for the noise.
> The only code we skip over is the code below, and it does neither cpu
> hotplug nor non-memblock allocations.
>
> 	jump_label_init();
> 	parse_early_param();
> 	after_dashes = parse_args("Booting kernel",
> 				  static_command_line, __start___param,
> 				  __stop___param - __start___param,
> 				  -1, -1, NULL, &unknown_bootoption);
> 	print_unknown_bootoptions();
> 	if (!IS_ERR_OR_NULL(after_dashes))
> 		parse_args("Setting init args", after_dashes, NULL, 0, -1, -1,
> 			   NULL, set_init_arg);
> 	if (extra_init_args)
> 		parse_args("Setting extra init args", extra_init_args,
> 			   NULL, 0, -1, -1, NULL, set_init_arg);
>
> 	/* Architectural and non-timekeeping rng init, before allocator init */
> 	random_init_early(command_line);
>
> 	/*
> 	 * These use large bootmem allocations and must precede
> 	 * kmem_cache_init()
> 	 */
> 	setup_log_buf(0);
> 	vfs_caches_init_early();
> 	sort_main_extable();
> 	trap_init();
>
Yeah, that looks safe.
>> > Signed-off-by: Mike Rapoport (IBM) <rppt@kernel.org>
>> > Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Thread overview: 35+ messages
2023-03-21 17:04 [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c Mike Rapoport
2023-03-21 17:05 ` [PATCH v2 01/14] mips: fix comment about pgtable_init() Mike Rapoport
2023-03-22 11:36 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 02/14] mm/page_alloc: add helper for checking if check_pages_enabled Mike Rapoport
2023-03-22 11:38 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 04/14] mm: handle hashdist initialization in mm/mm_init.c Mike Rapoport
2023-03-22 14:49 ` Vlastimil Babka
2023-03-22 15:00 ` Mike Rapoport
2023-03-21 17:05 ` [PATCH v2 05/14] mm/page_alloc: rename page_alloc_init() to page_alloc_init_cpuhp() Mike Rapoport
2023-03-22 14:50 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 06/14] init: fold build_all_zonelists() and page_alloc_init_cpuhp() to mm_init() Mike Rapoport
2023-03-22 16:10 ` Vlastimil Babka
2023-03-22 20:26 ` Mike Rapoport
2023-03-23 7:09 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 07/14] init,mm: move mm_init() to mm/mm_init.c and rename it to mm_core_init() Mike Rapoport
2023-03-22 16:24 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 08/14] mm: call {ptlock,pgtable}_cache_init() directly from mm_core_init() Mike Rapoport
2023-03-22 9:06 ` Sergei Shtylyov
2023-03-22 10:08 ` Mike Rapoport
2023-03-22 11:18 ` David Hildenbrand
2023-03-22 16:27 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 09/14] mm: move init_mem_debugging_and_hardening() to mm/mm_init.c Mike Rapoport
2023-03-22 16:28 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 10/14] init,mm: fold late call to page_ext_init() to page_alloc_init_late() Mike Rapoport
2023-03-22 16:30 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 11/14] mm: move mem_init_print_info() to mm_init.c Mike Rapoport
2023-03-22 16:32 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 12/14] mm: move kmem_cache_init() declaration to mm/slab.h Mike Rapoport
2023-03-22 16:33 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 13/14] mm: move vmalloc_init() declaration to mm/internal.h Mike Rapoport
2023-03-22 16:33 ` Vlastimil Babka
2023-03-21 17:05 ` [PATCH v2 14/14] MAINTAINERS: extend memblock entry to include MM initialization Mike Rapoport
2023-03-22 16:34 ` Vlastimil Babka
2023-03-22 11:19 ` [PATCH v2 00/14] mm: move core MM initialization to mm/mm_init.c David Hildenbrand
[not found] ` <20230321170513.2401534-4-rppt@kernel.org>
2023-03-22 14:26 ` [PATCH v2 03/14] mm: move most of " Vlastimil Babka