linux-hardening.vger.kernel.org archive mirror
* [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
@ 2024-04-09 21:02 Steven Rostedt
  2024-04-09 21:02 ` [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name Steven Rostedt
                   ` (2 more replies)
  0 siblings, 3 replies; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 21:02 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, Tony Luck,
	Guilherme G. Piccoli, linux-hardening, Guenter Roeck,
	Ross Zwisler, wklin, Vineeth Remanan Pillai, Joel Fernandes,
	Suleiman Souhlal, Linus Torvalds, Catalin Marinas, Will Deacon


Add a wildcard option for reserving physical memory on the kernel command line

Background:

In ChromeOS, we have 1 MB of pstore ramoops reserved so that we can
extract dmesg output and some other information when a crash happens in
the field. (This is only done when the user selects "Allow Google to
collect data for improving the system".) But there are cases where a bug
requires more data to be retrieved to figure out what is happening. We
would like to increase the pstore size, either temporarily or maybe even
permanently. The pstore on these devices is at a fixed location in RAM
(as the RAM is not cleared on soft reboots or crashes). The location is
chosen by the BIOS (coreboot) and passed to the kernel via ACPI tables on
x86. There's a driver that queries for this to initialize the pstore for
ChromeOS:

  See drivers/platform/chrome/chromeos_pstore.c

Problem:

The problem is that, even though there's a process to change the kernel
on these systems (and it is done regularly to install updates), the
firmware is updated much less frequently. Choosing the place in RAM also
takes special care, and the address may differ between boards. Updating
the size via firmware is a large effort and not something that many are
willing to do for a temporary pstore size change.

Requirement:

We need a way to reserve memory that will be at a consistent location on
every boot, as long as the kernel and system are the same. It does not
need to work when rebooting into a different kernel, or if the system can
change the memory layout between boots.

The reserved memory cannot be a hard-coded address, as the same kernel /
command line needs to run on several different machines. The picked memory
reservation just needs to be the same for a given machine, but may differ
between machines.

Solution:

The solution I have come up with is to introduce a new "memmap=" kernel
command line option (for x86; I would like something similar for ARM,
which uses device tree). As the "memmap=" kernel command line parameter
already takes on several flavors, I would like to introduce a new one.
The "memmap=" kernel parameter is of the format:

  memmap=nn[Xss]

Where nn is the size, 'X' defines the flavor, and 'ss' is usually a
parameter to that flavor. The '$' flavor reserves physical memory; for
example:

  memmap=12M$0xb0000000

Here 12 megabytes of memory will be reserved at address 0xb0000000. This
memory will not be part of the memory used by the kernel's memory
management system (e.g. alloc_pages() and kmalloc() will not return
memory in that location).
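
A note on the '$' flavor: some bootloaders treat '$' specially, so the
option may need escaping before it reaches the kernel. With GRUB2, for
instance, the grub.cfg entry might look something like the following
(address and paths illustrative; the exact escaping depends on the
bootloader and how the config is generated):

```
# grub.cfg fragment (illustrative): escape '$' so GRUB does not try to
# expand "$0xb0000000" as a variable before handing the command line
# to the kernel.
linux /vmlinuz root=/dev/sda1 memmap=12M\$0xb0000000
```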

I would like to introduce a "wildcard" flavor that is of the format:

  memmap=nn*align:label

Where nn is the size of memory to reserve, the align is the alignment of
that memory, and label is the way for other sub-systems to find that memory.
This way the kernel command line could have:

  memmap=12M*4096:oops   ramoops.mem_name=oops

At boot up, the kernel will search the usable memory regions for 12
megabytes with an alignment of 4096. It will start at the highest regions
and work its way down (leaving lower addresses free for old devices that
need low-address DMA). When it finds a region, it will save it in a small
table and mark it with the "oops" label. Then the pstore ramoops
sub-system can ask for the memory with that label and map itself there.
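
As a rough model of that top-down search, here is a small sketch in
Python (not the kernel code; the memory map below is made up for
illustration):

```python
def find_region(regions, size, align):
    """Return the highest aligned start that fits `size` bytes.

    `regions` is a list of (addr, length) tuples of usable RAM,
    sorted from low to high, mimicking the top-down e820 walk.
    `align` must be a power of two.
    """
    for addr, length in reversed(regions):
        # Place the block at the top of the region, then align down.
        start = (addr + length - size) & ~(align - 1)
        if start >= addr and start + size <= addr + length:
            return start
    return None  # no region large enough

# Made-up e820-style map: (start address, length) of usable RAM.
regions = [(0x100000, 0x4000000), (0x100000000, 0x80000000)]
print(hex(find_region(regions, 12 * 1024 * 1024, 4096)))  # 0x17f400000
```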

This prototype allows for 8 different mappings (which may be overkill;
4 is probably plenty), with 16 bytes to store each label. The table
lookup is only available until boot finishes, which means it is only
usable by builtin code and not by modules.

I have tested this and it works for us to solve the above problem. We can
update the kernel and command line and increase the size of pstore without
needing to update the firmware or know the memory layout of each board. I
have only tested this locally; it has not been tested in the field. Before
doing anything, I am looking for feedback. Maybe I missed something.
Perhaps there's a better way. Anyway, this is both a Proof of Concept and
a Request for Comments.

Thanks!

Steven Rostedt (Google) (2):
      mm/x86: Add wildcard '*' option as memmap=nn*align:name
      pstore/ramoops: Add ramoops.mem_name= command line option

----
 arch/x86/kernel/e820.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++
 fs/pstore/ram.c        | 18 ++++++++++
 include/linux/mm.h     |  2 ++
 mm/memory.c            |  7 ++++
 4 files changed, 118 insertions(+)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-09 21:02 [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
@ 2024-04-09 21:02 ` Steven Rostedt
  2024-04-09 22:23   ` Kees Cook
  2024-04-09 21:02 ` [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option Steven Rostedt
  2024-04-09 21:23 ` [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
  2 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 21:02 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, Tony Luck,
	Guilherme G. Piccoli, linux-hardening, Guenter Roeck,
	Ross Zwisler, wklin, Vineeth Remanan Pillai, Joel Fernandes,
	Suleiman Souhlal, Linus Torvalds, Catalin Marinas, Will Deacon

From: "Steven Rostedt (Google)" <rostedt@goodmis.org>

In order to allow requesting a memory region that can be used for things
like pstore on multiple machines where the memory layout is not the same,
add a new option to the memmap= kernel command line.

The existing memmap=nn$addr option reserves nn amount of memory at the
physical address addr. To use it, one must know the physical memory
layout and where usable memory exists in that layout.

Add a '*' option that will assign memory by looking for a range that can
fit the given size and alignment. It will start at the high addresses, and
then work its way down.

The format is:  memmap=nn*align:name

This will find nn amount of memory at the given alignment align. The
name field allows another subsystem to retrieve where the memory was
found. For example:

  memmap=12M*4096:oops ramoops.mem_name=oops

Here ramoops.mem_name tells ramoops that memory was reserved for it
via the wildcard '*' option, and it can find it by calling:

  u64 start, size;

  if (memmap_named("oops", &start, &size)) {
	/* start holds the start address and size holds the size given */
  }

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 arch/x86/kernel/e820.c | 91 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/mm.h     |  2 +
 mm/memory.c            |  7 ++++
 3 files changed, 100 insertions(+)

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 6f1b379e3b38..a8831ef30c73 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -64,6 +64,61 @@ struct e820_table *e820_table __refdata			= &e820_table_init;
 struct e820_table *e820_table_kexec __refdata		= &e820_table_kexec_init;
 struct e820_table *e820_table_firmware __refdata	= &e820_table_firmware_init;
 
+/* For wildcard memory requests, have a table to find them later */
+#define E820_MAX_MAPS		8
+#define E820_MAP_NAME_SIZE	16
+struct e820_mmap_map {
+	char			name[E820_MAP_NAME_SIZE];
+	u64			start;
+	u64			size;
+};
+static struct e820_mmap_map e820_mmap_list[E820_MAX_MAPS] __initdata;
+static int e820_mmap_size				__initdata;
+
+/* Add wildcard region with a lookup name */
+static int __init e820_add_mmap(u64 start, u64 size, const char *name)
+{
+	struct e820_mmap_map *map;
+
+	if (!name || !name[0] || strlen(name) >= E820_MAP_NAME_SIZE)
+		return -EINVAL;
+
+	if (e820_mmap_size >= E820_MAX_MAPS)
+		return -1;
+
+	map = &e820_mmap_list[e820_mmap_size++];
+	map->start = start;
+	map->size = size;
+	strcpy(map->name, name);
+	return 0;
+}
+
+/**
+ * memmap_named - Find a wildcard region with a given name
+ * @name: The name that is attached to a wildcard region
+ * @start: If found, holds the start address
+ * @size: If found, holds the size of the address.
+ *
+ * Returns: 1 if found or 0 if not found.
+ */
+int __init memmap_named(const char *name, u64 *start, u64 *size)
+{
+	struct e820_mmap_map *map;
+	int i;
+
+	for (i = 0; i < e820_mmap_size; i++) {
+		map = &e820_mmap_list[i];
+		if (!map->size)
+			continue;
+		if (strcmp(name, map->name) == 0) {
+			*start = map->start;
+			*size = map->size;
+			return 1;
+		}
+	}
+	return 0;
+}
+
 /* For PCI or other memory-mapped resources */
 unsigned long pci_mem_start = 0xaeedbabe;
 #ifdef CONFIG_PCI
@@ -200,6 +255,29 @@ static void __init e820_print_type(enum e820_type type)
 	}
 }
 
+/*
+ * Search for usable ram that can be reserved for a wildcard.
+ * Start at the highest memory and work down to lower memory.
+ */
+static s64 e820__region(u64 size, u64 align)
+{
+	u64 start;
+	int i;
+
+	for (i = e820_table->nr_entries - 1; i >= 0; i--) {
+		if (e820_table->entries[i].type != E820_TYPE_RAM &&
+		    e820_table->entries[i].type != E820_TYPE_RESERVED_KERN)
+			continue;
+
+		start = e820_table->entries[i].addr + e820_table->entries[i].size;
+		start -= size;
+		start = ALIGN_DOWN(start, align);
+		if (start >= e820_table->entries[i].addr)
+			return start;
+	}
+	return -1;
+}
+
 void __init e820__print_table(char *who)
 {
 	int i;
@@ -944,6 +1022,19 @@ static int __init parse_memmap_one(char *p)
 	} else if (*p == '$') {
 		start_at = memparse(p+1, &p);
 		e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
+	} else if (*p == '*') {
+		u64 align;
+		/* Followed by alignment and ':' then the name */
+		align = memparse(p+1, &p);
+		start_at = e820__region(mem_size, align);
+		if ((s64)start_at < 0)
+			return -EINVAL;
+		if (*p != ':')
+			return -EINVAL;
+		p++;
+		e820_add_mmap(start_at, mem_size, p);
+		p += strlen(p);
+		e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
 	} else if (*p == '!') {
 		start_at = memparse(p+1, &p);
 		e820__range_add(start_at, mem_size, E820_TYPE_PRAM);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0436b919f1c7..cf9b34454c6f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4202,4 +4202,6 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+int memmap_named(const char *name, u64 *start, u64 *size);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index d2155ced45f8..7a29f17df7c1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -120,6 +120,13 @@ static bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf)
 	return pte_marker_uffd_wp(vmf->orig_pte);
 }
 
+int __init __weak memmap_named(const char *name, u64 *start, u64 *size)
+{
+	pr_info("Kernel command line: memmap=nn*align:name not supported on this kernel\n");
+	/* zero means not found */
+	return 0;
+}
+
 /*
  * A number of key systems in x86 including ioremap() rely on the assumption
  * that high_memory defines the upper bound on direct map memory, then end
-- 
2.43.0




* [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option
  2024-04-09 21:02 [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
  2024-04-09 21:02 ` [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name Steven Rostedt
@ 2024-04-09 21:02 ` Steven Rostedt
  2024-04-09 22:18   ` Kees Cook
  2024-04-09 21:23 ` [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
  2 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 21:02 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, Tony Luck,
	Guilherme G. Piccoli, linux-hardening, Guenter Roeck,
	Ross Zwisler, wklin, Vineeth Remanan Pillai, Joel Fernandes,
	Suleiman Souhlal, Linus Torvalds, Catalin Marinas, Will Deacon

From: "Steven Rostedt (Google)" <rostedt@goodmis.org>

Add a method for ramoops to find a region specified by
memmap=nn*align:name. With the kernel command line:

  memmap=12M*4096:oops ramoops.mem_name=oops

ramoops will use the size and location of the memory that the memmap
parameter found and labeled "oops". The "oops" given to the ramoops
option is used to search for that label.

This allows for arbitrary RAM to be used for ramoops if it is known that
the memory is not cleared on kernel crashes or soft reboots.
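
For context, this is how the preserved records would typically be read
back after a crash and reboot; the mount point and record names below
are illustrative and vary with kernel config:

```
# Illustrative: pstore is usually auto-mounted; otherwise:
mount -t pstore pstore /sys/fs/pstore
ls /sys/fs/pstore            # e.g. dmesg-ramoops-0, console-ramoops-0
cat /sys/fs/pstore/dmesg-ramoops-0
```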

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 fs/pstore/ram.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index b1a455f42e93..c200388399fb 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -50,6 +50,11 @@ module_param_hw(mem_address, ullong, other, 0400);
 MODULE_PARM_DESC(mem_address,
 		"start of reserved RAM used to store oops/panic logs");
 
+static char *mem_name;
+module_param_named(mem_name, mem_name, charp, 0400);
+MODULE_PARM_DESC(mem_name,
+		"name of kernel param that holds addr (builtin only)");
+
 static ulong mem_size;
 module_param(mem_size, ulong, 0400);
 MODULE_PARM_DESC(mem_size,
@@ -914,6 +919,19 @@ static void __init ramoops_register_dummy(void)
 {
 	struct ramoops_platform_data pdata;
 
+#ifndef MODULE
+	/* Only allowed when builtin */
+	if (mem_name) {
+		u64 start;
+		u64 size;
+
+		if (memmap_named(mem_name, &start, &size)) {
+			mem_address = start;
+			mem_size = size;
+		}
+	}
+#endif
+
 	/*
 	 * Prepare a dummy platform data structure to carry the module
 	 * parameters. If mem_size isn't set, then there are no module
-- 
2.43.0




* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 21:02 [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
  2024-04-09 21:02 ` [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name Steven Rostedt
  2024-04-09 21:02 ` [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option Steven Rostedt
@ 2024-04-09 21:23 ` Steven Rostedt
  2024-04-09 22:19   ` Kees Cook
  2 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 21:23 UTC (permalink / raw)
  To: linux-kernel, linux-trace-kernel
  Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, Tony Luck,
	Guilherme G. Piccoli, linux-hardening, Guenter Roeck,
	Ross Zwisler, wklin, Vineeth Remanan Pillai, Joel Fernandes,
	Suleiman Souhlal, Linus Torvalds, Catalin Marinas, Will Deacon

On Tue, 09 Apr 2024 17:02:54 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

>   memmap=12M*4096:oops   ramoops.mem_name=oops

I forgot to mention that this makes it trivial for any machine that doesn't
clear memory on soft-reboot, to enable console ramoops (to have access to
the last boot dmesg without needing serial).
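
For that use case the command line might look something like the
following (sizes illustrative; console_size is an existing ramoops
module parameter):

```
memmap=8M*4096:oops ramoops.mem_name=oops ramoops.console_size=0x100000
```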

I tested this on a couple of my test boxes and on QEMU, and it works rather
well.

-- Steve


* Re: [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option
  2024-04-09 21:02 ` [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option Steven Rostedt
@ 2024-04-09 22:18   ` Kees Cook
  2024-04-09 23:14     ` Steven Rostedt
  0 siblings, 1 reply; 28+ messages in thread
From: Kees Cook @ 2024-04-09 22:18 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, Apr 09, 2024 at 05:02:56PM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
> 
> Add a method to find a region specified by memmap=nn*align:name for
> ramoops. Adding a kernel command line parameter:
> 
>   memmap=12M*4096:oops ramoops.mem_name=oops
> 
> Will use the size and location defined by the memmap parameter where it
> finds the memory and labels it "oops". The "oops" in the ramoops option
> is used to search for it.
> 
> This allows for arbitrary RAM to be used for ramoops if it is known that
> the memory is not cleared on kernel crashes or soft reboots.
> 
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
>  fs/pstore/ram.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
> index b1a455f42e93..c200388399fb 100644
> --- a/fs/pstore/ram.c
> +++ b/fs/pstore/ram.c
> @@ -50,6 +50,11 @@ module_param_hw(mem_address, ullong, other, 0400);
>  MODULE_PARM_DESC(mem_address,
>  		"start of reserved RAM used to store oops/panic logs");
>  
> +static char *mem_name;
> +module_param_named(mem_name, mem_name, charp, 0400);
> +MODULE_PARM_DESC(mem_name,
> +		"name of kernel param that holds addr (builtin only)");
> +
>  static ulong mem_size;
>  module_param(mem_size, ulong, 0400);
>  MODULE_PARM_DESC(mem_size,
> @@ -914,6 +919,19 @@ static void __init ramoops_register_dummy(void)
>  {
>  	struct ramoops_platform_data pdata;
>  
> +#ifndef MODULE
> +	/* Only allowed when builtin */

Why only when builtin?

> +	if (mem_name) {
> +		u64 start;
> +		u64 size;
> +
> +		if (memmap_named(mem_name, &start, &size)) {
> +			mem_address = start;
> +			mem_size = size;
> +		}
> +	}
> +#endif

Otherwise this looks good, though I'd prefer some comments about what's
happening here.

(And in retrospect, separately, I probably need to rename "dummy" to
"commandline" or something, since it's gathering valid settings here...)

-- 
Kees Cook


* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 21:23 ` [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
@ 2024-04-09 22:19   ` Kees Cook
  2024-04-09 22:25     ` Luck, Tony
  0 siblings, 1 reply; 28+ messages in thread
From: Kees Cook @ 2024-04-09 22:19 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, Apr 09, 2024 at 05:23:58PM -0400, Steven Rostedt wrote:
> On Tue, 09 Apr 2024 17:02:54 -0400
> Steven Rostedt <rostedt@goodmis.org> wrote:
> 
> >   memmap=12M*4096:oops   ramoops.mem_name=oops
> 
> I forgot to mention that this makes it trivial for any machine that doesn't
> clear memory on soft-reboot, to enable console ramoops (to have access to
> the last boot dmesg without needing serial).
> 
> I tested this on a couple of my test boxes and on QEMU, and it works rather
> well.

I've long wanted a "stable for this machine and kernel" memory region
like this for pstore. It would make testing much easier.

-- 
Kees Cook


* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-09 21:02 ` [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name Steven Rostedt
@ 2024-04-09 22:23   ` Kees Cook
  2024-04-09 23:11     ` Steven Rostedt
  0 siblings, 1 reply; 28+ messages in thread
From: Kees Cook @ 2024-04-09 22:23 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, Apr 09, 2024 at 05:02:55PM -0400, Steven Rostedt wrote:
> From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
> 
> In order to allow for requesting a memory region that can be used for
> things like pstore on multiple machines where the memory is not the same,
> add a new option to the memmap=nn$ kernel command line.
> 
> The memmap=nn$addr will reserve nn amount of memory at the physical
> address addr. To use this, one must know the physical memory layout and
> know where usable memory exists in the physical layout.
> 
> Add a '*' option that will assign memory by looking for a range that can
> fit the given size and alignment. It will start at the high addresses, and
> then work its way down.
> 
> The format is:  memmap=nn*align:name
> 
> Where it will find nn amount of memory at the given alignment of align.
> The name field is to allow another subsystem to retrieve where the memory
> was found. For example:
> 
>   memmap=12M*4096:oops ramoops.mem_name=oops
> 
> Where ramoops.mem_name will tell ramoops that memory was reserved for it
> via the wildcard '*' option and it can find it by calling:
> 
>   if (memmap_named("oops", &start, &size)) {
> 	/* start holds the start address and size holds the size given */
>   }
> 
> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
> ---
>  arch/x86/kernel/e820.c | 91 ++++++++++++++++++++++++++++++++++++++++++
>  include/linux/mm.h     |  2 +
>  mm/memory.c            |  7 ++++
>  3 files changed, 100 insertions(+)

Do we need to involve e820 at all? I think it might be possible to just
have pstore call request_mem_region() very early? Or does KASLR make
that unstable?

-- 
Kees Cook


* RE: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 22:19   ` Kees Cook
@ 2024-04-09 22:25     ` Luck, Tony
  2024-04-09 22:41       ` Joel Fernandes
                         ` (3 more replies)
  0 siblings, 4 replies; 28+ messages in thread
From: Luck, Tony @ 2024-04-09 22:25 UTC (permalink / raw)
  To: Kees Cook, Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

>> I forgot to mention that this makes it trivial for any machine that doesn't
>> clear memory on soft-reboot, to enable console ramoops (to have access to
>> the last boot dmesg without needing serial).
>> 
>> I tested this on a couple of my test boxes and on QEMU, and it works rather
>> well.
>
> I've long wanted a "stable for this machine and kernel" memory region
> like this for pstore. It would make testing much easier.

Which systems does this work on? I'd assume that servers (and anything
else with ECC memory) would nuke contents while resetting ECC to clean
state.

-Tony


* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 22:25     ` Luck, Tony
@ 2024-04-09 22:41       ` Joel Fernandes
  2024-04-09 23:16       ` Steven Rostedt
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 28+ messages in thread
From: Joel Fernandes @ 2024-04-09 22:41 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Kees Cook, Steven Rostedt, linux-kernel, linux-trace-kernel,
	Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Guilherme G. Piccoli,
	linux-hardening, Guenter Roeck, Ross Zwisler, wklin,
	Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon



> On Apr 10, 2024, at 3:55 AM, Luck, Tony <tony.luck@intel.com> wrote:
> 
> 
>> 
>>> I forgot to mention that this makes it trivial for any machine that doesn't
>>> clear memory on soft-reboot, to enable console ramoops (to have access to
>>> the last boot dmesg without needing serial).
>>> 
>>> I tested this on a couple of my test boxes and on QEMU, and it works rather
>>> well.
>> 
>> I've long wanted a "stable for this machine and kernel" memory region
>> like this for pstore. It would make testing much easier.
> 
> Which systems does this work on? I'd assume that servers (and anything
> else with ECC memory) would nuke contents while resetting ECC to clean
> state.

If that were the case universally, then the ramoops pstore backend would not work either?

And yet we have been getting the last boot's kernel logs via pstore for many years now, on embedded-ish devices.

From my reading, ECC-enabled DRAM is not present on lots of systems and IIRC, pstore ramoops has its own ECC.

Or did I miss a recent trend with ECC-enabled DRAM?

- Joel



> 
> -Tony


* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-09 22:23   ` Kees Cook
@ 2024-04-09 23:11     ` Steven Rostedt
  2024-04-09 23:41       ` Kees Cook
  0 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 23:11 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, 9 Apr 2024 15:23:07 -0700
Kees Cook <keescook@chromium.org> wrote:

> Do we need to involve e820 at all? I think it might be possible to just
> have pstore call request_mem_region() very early? Or does KASLR make
> that unstable?

Yeah, would that give the same physical memory each boot, and can we
guarantee that KASLR will not map the kernel over the previous location?

-- Steve


* Re: [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option
  2024-04-09 22:18   ` Kees Cook
@ 2024-04-09 23:14     ` Steven Rostedt
  0 siblings, 0 replies; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 23:14 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, 9 Apr 2024 15:18:45 -0700
Kees Cook <keescook@chromium.org> wrote:

> > @@ -914,6 +919,19 @@ static void __init ramoops_register_dummy(void)
> >  {
> >  	struct ramoops_platform_data pdata;
> >  
> > +#ifndef MODULE
> > +	/* Only allowed when builtin */  
> 
> Why only when builtin?

Well, because the memory table that maps the found physical memory to a
label is marked as __initdata, and will not be available after boot. If
you wanted it for a module, you would need some builtin code to find it.

> 
> > +	if (mem_name) {
> > +		u64 start;
> > +		u64 size;
> > +
> > +		if (memmap_named(mem_name, &start, &size)) {
> > +			mem_address = start;
> > +			mem_size = size;
> > +		}
> > +	}
> > +#endif  
> 
> Otherwise this looks good, though I'd prefer some comments about what's
> happening here.
> 
> (And in retrospect, separately, I probably need to rename "dummy" to
> "commandline" or something, since it's gathering valid settings here...)

Yeah, that was a bit confusing. I kept thinking "is this function stable?".

-- Steve


* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 22:25     ` Luck, Tony
  2024-04-09 22:41       ` Joel Fernandes
@ 2024-04-09 23:16       ` Steven Rostedt
  2024-04-09 23:37       ` Kees Cook
  2024-04-11 19:11       ` Guilherme G. Piccoli
  3 siblings, 0 replies; 28+ messages in thread
From: Steven Rostedt @ 2024-04-09 23:16 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Kees Cook, linux-kernel, linux-trace-kernel, Masami Hiramatsu,
	Mark Rutland, Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, 9 Apr 2024 22:25:33 +0000
"Luck, Tony" <tony.luck@intel.com> wrote:

> >> I forgot to mention that this makes it trivial for any machine that doesn't
> >> clear memory on soft-reboot, to enable console ramoops (to have access to
> >> the last boot dmesg without needing serial).
> >> 
> >> I tested this on a couple of my test boxes and on QEMU, and it works rather
> >> well.  
> >
> > I've long wanted a "stable for this machine and kernel" memory region
> > like this for pstore. It would make testing much easier.  
> 
> Which systems does this work on? I'd assume that servers (and anything
> else with ECC memory) would nuke contents while resetting ECC to clean
> state.
>

Well I tested it on a couple of chromebooks, a test box and a laptop (as
well as QEMU). I know that ramoops has an ecc option. I'm guessing that
would help here (but I'd have to defer to others to answer that).
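
For reference, ramoops exposes that as its "ecc" module parameter (in
fs/pstore/ram.c): a non-zero value enables software ECC on the region,
with 1 selecting the default 16-byte ECC buffer. On the command line
that would look something like:

```
ramoops.mem_name=oops ramoops.ecc=1
```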

-- Steve

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 22:25     ` Luck, Tony
  2024-04-09 22:41       ` Joel Fernandes
  2024-04-09 23:16       ` Steven Rostedt
@ 2024-04-09 23:37       ` Kees Cook
  2024-04-09 23:52         ` Luck, Tony
  2024-04-11 19:11       ` Guilherme G. Piccoli
  3 siblings, 1 reply; 28+ messages in thread
From: Kees Cook @ 2024-04-09 23:37 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel,
	Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Guilherme G. Piccoli,
	linux-hardening, Guenter Roeck, Ross Zwisler, wklin,
	Vineeth Remanan Pillai, Joel Fernandes, Suleiman Souhlal,
	Linus Torvalds, Catalin Marinas, Will Deacon

On Tue, Apr 09, 2024 at 10:25:33PM +0000, Luck, Tony wrote:
> >> I forgot to mention that this makes it trivial for any machine that doesn't
> >> clear memory on soft-reboot, to enable console ramoops (to have access to
> >> the last boot dmesg without needing serial).
> >> 
> >> I tested this on a couple of my test boxes and on QEMU, and it works rather
> >> well.
> >
> > I've long wanted a "stable for this machine and kernel" memory region
> > like this for pstore. It would make testing much easier.
> 
> Which systems does this work on? I'd assume that servers (and anything
> else with ECC memory) would nuke contents while resetting ECC to clean
> state.

Do ECC servers wipe their RAM by default? I know that if you build with
CONFIG_RESET_ATTACK_MITIGATION=y on an EFI system that supports the
MemoryOverwriteRequestControl EFI variable you'll get a RAM wipe...

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-09 23:11     ` Steven Rostedt
@ 2024-04-09 23:41       ` Kees Cook
  2024-04-12 20:59         ` Mike Rapoport
  0 siblings, 1 reply; 28+ messages in thread
From: Kees Cook @ 2024-04-09 23:41 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Tue, Apr 09, 2024 at 07:11:56PM -0400, Steven Rostedt wrote:
> On Tue, 9 Apr 2024 15:23:07 -0700
> Kees Cook <keescook@chromium.org> wrote:
> 
> > Do we need to involve e820 at all? I think it might be possible to just
> > have pstore call request_mem_region() very early? Or does KASLR make
> > that unstable?
> 
> Yeah, would that give the same physical memory each boot, and can we
> guarantee that KASLR will not map the kernel over the previous location?

Hm, no, for physical memory it needs to get excluded very early, which
means e820. So, yeah, your proposal makes sense. I'm not super excited
about this being x86-only, though. What does arm64 do for memmap?

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 28+ messages in thread

* RE: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 23:37       ` Kees Cook
@ 2024-04-09 23:52         ` Luck, Tony
  0 siblings, 0 replies; 28+ messages in thread
From: Luck, Tony @ 2024-04-09 23:52 UTC (permalink / raw)
  To: Kees Cook
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel,
	Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Guilherme G. Piccoli,
	linux-hardening, Guenter Roeck, Ross Zwisler, wklin,
	Vineeth Remanan Pillai, Joel Fernandes, Suleiman Souhlal,
	Linus Torvalds, Catalin Marinas, Will Deacon

> Do ECC servers wipe their RAM by default? I know that if you build with
> CONFIG_RESET_ATTACK_MITIGATION=y on an EFI system that supports the
> MemoryOverwriteRequestControl EFI variable you'll get a RAM wipe...

I know that after I've been running RAS tests that inject ECC errors into thousands of
pages, those errors all disappear after a reboot.

I think some BIOSes have options to speed up boot by skipping memory initialization.
I don't know if anyone makes that the default mode.

-Tony


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-09 22:25     ` Luck, Tony
                         ` (2 preceding siblings ...)
  2024-04-09 23:37       ` Kees Cook
@ 2024-04-11 19:11       ` Guilherme G. Piccoli
  2024-04-11 19:40         ` Steven Rostedt
  3 siblings, 1 reply; 28+ messages in thread
From: Guilherme G. Piccoli @ 2024-04-11 19:11 UTC (permalink / raw)
  To: Luck, Tony, Kees Cook, Steven Rostedt, Joel Fernandes
  Cc: linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On 09/04/2024 19:25, Luck, Tony wrote:
>>> I forgot to mention that this makes it trivial for any machine that doesn't
>>> clear memory on soft-reboot, to enable console ramoops (to have access to
>>> the last boot dmesg without needing serial).
>>>
>>> I tested this on a couple of my test boxes and on QEMU, and it works rather
>>> well.
>>
>> I've long wanted a "stable for this machine and kernel" memory region
>> like this for pstore. It would make testing much easier.
> 
> Which systems does this work on? I'd assume that servers (and anything
> else with ECC memory) would nuke contents while resetting ECC to clean
> state.
> 
> -Tony

Thanks Steve! Like Kees, I've been wanting a consistent way of mapping
some RAM for pstore for a while, without resorting to platform drivers
like Chromebooks do...

The idea seems very interesting and helpful; I'll test it here. My only
concern / "complaint" is that it's currently only implemented for builtin
ramoops, which is not the default in many distros (like Arch, Ubuntu,
Debian). I read patch 2 (and the discussion), so I think it would be good to
have that builtin helper implemented upfront to allow modular usage of
ramoops.

Now, responding to Tony: the Steam Deck also uses pstore/ram to store logs,
and I've tested it on my AMD desktop; it does work. It seems disabling
memory retraining in the BIOS (to speed up boot?) is somewhat common, though
I'm not sure about servers. As Joel mentioned as well, it's quite common to
use pstore/ram in the ARM embedded world.

Cheers,


Guilherme

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-11 19:11       ` Guilherme G. Piccoli
@ 2024-04-11 19:40         ` Steven Rostedt
  2024-04-12 12:17           ` Guilherme G. Piccoli
  0 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-11 19:40 UTC (permalink / raw)
  To: Guilherme G. Piccoli
  Cc: Luck, Tony, Kees Cook, Joel Fernandes, linux-kernel,
	linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Thu, 11 Apr 2024 16:11:55 -0300
"Guilherme G. Piccoli" <gpiccoli@igalia.com> wrote:

> Thanks Steve! Like Kees, I've been wanting a consistent way of mapping
> some RAM for pstore for a while, without resorting to platform drivers
> like Chromebooks do...

Great!

> 
> The idea seems very interesting and helpful; I'll test it here. My only
> concern / "complaint" is that it's currently only implemented for builtin
> ramoops, which is not the default in many distros (like Arch, Ubuntu,
> Debian). I read patch 2 (and the discussion), so I think it would be good to
> have that builtin helper implemented upfront to allow modular usage of
> ramoops.

What I think I could do is add a check, run after the memory allocators are
up, that copies the table mapping to the heap if the table is filled. The
reason I did it this way was that it was the easiest way to save the
label-to-address mapping before memory is initialized. I use an __initdata
array (why waste memory if it's hardly ever used?).

But after memory is initialized, we can check whether the table has content,
and if so allocate a copy, store the entries there, and use that table
instead. That would also give modules a way to find the address.

-- Steve


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-11 19:40         ` Steven Rostedt
@ 2024-04-12 12:17           ` Guilherme G. Piccoli
  2024-04-12 17:22             ` Steven Rostedt
  0 siblings, 1 reply; 28+ messages in thread
From: Guilherme G. Piccoli @ 2024-04-12 12:17 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Luck, Tony, Kees Cook, Joel Fernandes, linux-kernel,
	linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On 11/04/2024 16:40, Steven Rostedt wrote:
> [...]
> What I think I could do is add a check, run after the memory allocators are
> up, that copies the table mapping to the heap if the table is filled. The
> reason I did it this way was that it was the easiest way to save the
> label-to-address mapping before memory is initialized. I use an __initdata
> array (why waste memory if it's hardly ever used?).
> 
> But after memory is initialized, we can check whether the table has content,
> and if so allocate a copy, store the entries there, and use that table
> instead. That would also give modules a way to find the address.
> 
> -- Steve
> 

Thanks Steve, seems a good idea. With that, I could test on kdumpst (the
tool used on Steam Deck), since it relies on modular pstore/ram.

Cheers!

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-12 12:17           ` Guilherme G. Piccoli
@ 2024-04-12 17:22             ` Steven Rostedt
  2024-05-01 14:45               ` Mike Rapoport
  0 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-12 17:22 UTC (permalink / raw)
  To: Guilherme G. Piccoli
  Cc: Luck, Tony, Kees Cook, Joel Fernandes, linux-kernel,
	linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Fri, 12 Apr 2024 09:17:18 -0300
"Guilherme G. Piccoli" <gpiccoli@igalia.com> wrote:

> Thanks Steve, seems a good idea. With that, I could test on kdumpst (the
> tool used on Steam Deck), since it relies on modular pstore/ram.

Something like this could work.

-- Steve

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index a8831ef30c73..878aee8b2399 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -16,6 +16,7 @@
 #include <linux/firmware-map.h>
 #include <linux/sort.h>
 #include <linux/memory_hotplug.h>
+#include <linux/mm.h>
 
 #include <asm/e820/api.h>
 #include <asm/setup.h>
@@ -64,61 +65,6 @@ struct e820_table *e820_table __refdata			= &e820_table_init;
 struct e820_table *e820_table_kexec __refdata		= &e820_table_kexec_init;
 struct e820_table *e820_table_firmware __refdata	= &e820_table_firmware_init;
 
-/* For wildcard memory requests, have a table to find them later */
-#define E820_MAX_MAPS		8
-#define E820_MAP_NAME_SIZE	16
-struct e820_mmap_map {
-	char			name[E820_MAP_NAME_SIZE];
-	u64			start;
-	u64			size;
-};
-static struct e820_mmap_map e820_mmap_list[E820_MAX_MAPS] __initdata;
-static int e820_mmap_size				__initdata;
-
-/* Add wildcard region with a lookup name */
-static int __init e820_add_mmap(u64 start, u64 size, const char *name)
-{
-	struct e820_mmap_map *map;
-
-	if (!name || !name[0] || strlen(name) >= E820_MAP_NAME_SIZE)
-		return -EINVAL;
-
-	if (e820_mmap_size >= E820_MAX_MAPS)
-		return -1;
-
-	map = &e820_mmap_list[e820_mmap_size++];
-	map->start = start;
-	map->size = size;
-	strcpy(map->name, name);
-	return 0;
-}
-
-/**
- * memmap_named - Find a wildcard region with a given name
- * @name: The name that is attached to a wildcard region
- * @start: If found, holds the start address
- * @size: If found, holds the size of the address.
- *
- * Returns: 1 if found or 0 if not found.
- */
-int __init memmap_named(const char *name, u64 *start, u64 *size)
-{
-	struct e820_mmap_map *map;
-	int i;
-
-	for (i = 0; i < e820_mmap_size; i++) {
-		map = &e820_mmap_list[i];
-		if (!map->size)
-			continue;
-		if (strcmp(name, map->name) == 0) {
-			*start = map->start;
-			*size = map->size;
-			return 1;
-		}
-	}
-	return 0;
-}
-
 /* For PCI or other memory-mapped resources */
 unsigned long pci_mem_start = 0xaeedbabe;
 #ifdef CONFIG_PCI
@@ -1024,6 +970,8 @@ static int __init parse_memmap_one(char *p)
 		e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
 	} else if (*p == '*') {
 		u64 align;
+		int ret;
+
 		/* Followed by alignment and ':' then the name */
 		align = memparse(p+1, &p);
 		start_at = e820__region(mem_size, align);
@@ -1032,9 +980,10 @@ static int __init parse_memmap_one(char *p)
 		if (*p != ':')
 			return -EINVAL;
 		p++;
-		e820_add_mmap(start_at, mem_size, p);
+		ret = memmap_add(start_at, mem_size, p);
 		p += strlen(p);
-		e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
+		if (!ret)
+			e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
 	} else if (*p == '!') {
 		start_at = memparse(p+1, &p);
 		e820__range_add(start_at, mem_size, E820_TYPE_PRAM);
diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
index c200388399fb..22d2e2731dc2 100644
--- a/fs/pstore/ram.c
+++ b/fs/pstore/ram.c
@@ -919,7 +919,6 @@ static void __init ramoops_register_dummy(void)
 {
 	struct ramoops_platform_data pdata;
 
-#ifndef MODULE
 	/* Only allowed when builtin */
 	if (mem_name) {
 		u64 start;
@@ -930,7 +929,6 @@ static void __init ramoops_register_dummy(void)
 			mem_size = size;
 		}
 	}
-#endif
 
 	/*
 	 * Prepare a dummy platform data structure to carry the module
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cf9b34454c6f..6ce1c6929d1f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4203,5 +4203,6 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 }
 
 int memmap_named(const char *name, u64 *start, u64 *size);
+int memmap_add(long start, long size, const char *name);
 
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 7a29f17df7c1..fe054e1bb678 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -120,12 +120,6 @@ static bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf)
 	return pte_marker_uffd_wp(vmf->orig_pte);
 }
 
-int __init __weak memmap_named(const char *name, u64 *start, u64 *size)
-{
-	pr_info("Kernel command line: memmap=nn*align:name not supported on this kernel");
-	/* zero means not found */
-	return 0;
-}
 
 /*
  * A number of key systems in x86 including ioremap() rely on the assumption
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 549e76af8f82..e5b729b83fdc 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -154,6 +154,77 @@ static __init int set_mminit_loglevel(char *str)
 early_param("mminit_loglevel", set_mminit_loglevel);
 #endif /* CONFIG_DEBUG_MEMORY_INIT */
 
+/* For wildcard memory requests, have a table to find them later */
+#define MAX_MAPS		8
+#define MAP_NAME_SIZE	16
+struct mmap_map {
+	char			name[MAP_NAME_SIZE];
+	long			start;
+	long			size;
+};
+static struct mmap_map early_mmap_list[MAX_MAPS] __initdata;
+static int early_mmap_size			__initdata;
+static struct mmap_map *mmap_list;
+
+/* Add wildcard region with a lookup name */
+int memmap_add(long start, long size, const char *name)
+{
+	struct mmap_map *map;
+
+	if (!name || !name[0] || strlen(name) >= MAP_NAME_SIZE)
+		return -EINVAL;
+
+	if (early_mmap_size >= MAX_MAPS)
+		return -1;
+
+	map = &early_mmap_list[early_mmap_size++];
+	map->start = start;
+	map->size = size;
+	strcpy(map->name, name);
+	return 0;
+}
+
+static void __init memmap_copy(void)
+{
+	if (!early_mmap_size)
+		return;
+
+	mmap_list = kcalloc(early_mmap_size + 1, sizeof(*mmap_list), GFP_KERNEL);
+	if (!mmap_list)
+		return;
+
+	for (int i = 0; i < early_mmap_size; i++)
+		mmap_list[i] = early_mmap_list[i];
+}
+
+/**
+ * memmap_named - Find a wildcard region with a given name
+ * @name: The name that is attached to a wildcard region
+ * @start: If found, holds the start address
+ * @size: If found, holds the size of the address.
+ *
+ * Returns: 1 if found or 0 if not found.
+ */
+int memmap_named(const char *name, u64 *start, u64 *size)
+{
+	struct mmap_map *map;
+
+	if (!mmap_list)
+		return 0;
+
+	for (int i = 0; mmap_list[i].name[0]; i++) {
+		map = &mmap_list[i];
+		if (!map->size)
+			continue;
+		if (strcmp(name, map->name) == 0) {
+			*start = map->start;
+			*size = map->size;
+			return 1;
+		}
+	}
+	return 0;
+}
+
 struct kobject *mm_kobj;
 
 #ifdef CONFIG_SMP
@@ -2793,4 +2864,5 @@ void __init mm_core_init(void)
 	pti_init();
 	kmsan_init_runtime();
 	mm_cache_init();
+	memmap_copy();
 }

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-09 23:41       ` Kees Cook
@ 2024-04-12 20:59         ` Mike Rapoport
  2024-04-12 22:19           ` Steven Rostedt
  0 siblings, 1 reply; 28+ messages in thread
From: Mike Rapoport @ 2024-04-12 20:59 UTC (permalink / raw)
  To: Kees Cook
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel,
	Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Tony Luck, Guilherme G. Piccoli,
	linux-hardening, Guenter Roeck, Ross Zwisler, wklin,
	Vineeth Remanan Pillai, Joel Fernandes, Suleiman Souhlal,
	Linus Torvalds, Catalin Marinas, Will Deacon

On Tue, Apr 09, 2024 at 04:41:24PM -0700, Kees Cook wrote:
> On Tue, Apr 09, 2024 at 07:11:56PM -0400, Steven Rostedt wrote:
> > On Tue, 9 Apr 2024 15:23:07 -0700
> > Kees Cook <keescook@chromium.org> wrote:
> > 
> > > Do we need to involve e820 at all? I think it might be possible to just
> > > have pstore call request_mem_region() very early? Or does KASLR make
> > > that unstable?
> > 
> > Yeah, would that give the same physical memory each boot, and can we
> > guarantee that KASLR will not map the kernel over the previous location?
> 
> Hm, no, for physical memory it needs to get excluded very early, which
> means e820.

Whatever memory is reserved in arch/x86/kernel/e820.c, that happens after
kaslr, so to begin with, a new memmap parameter should be also added to
parse_memmap in arch/x86/boot/compressed/kaslr.c to ensure the same
physical address will be available after KASLR.

More generally, memmap= is x86-specific and a bit of a hack.
Why not add a new kernel parameter that is parsed in, say,
mm/mm_init.c, creates the mmap_map (or whatever it ends up being named),
and reserves that memory in memblock rather than in e820?

This still will require update to arch/x86/boot/compressed/kaslr.c of
course.

> So, yeah, your proposal makes sense. I'm not super excited
> about this being x86-only, though. What does arm64 do for memmap?
> 
> -- 
> Kees Cook
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-12 20:59         ` Mike Rapoport
@ 2024-04-12 22:19           ` Steven Rostedt
  2024-04-15 17:22             ` Kees Cook
  0 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-04-12 22:19 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Kees Cook, linux-kernel, linux-trace-kernel, Masami Hiramatsu,
	Mark Rutland, Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, Tony Luck, Guilherme G. Piccoli, linux-hardening,
	Guenter Roeck, Ross Zwisler, wklin, Vineeth Remanan Pillai,
	Joel Fernandes, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Fri, 12 Apr 2024 23:59:07 +0300
Mike Rapoport <rppt@kernel.org> wrote:

> On Tue, Apr 09, 2024 at 04:41:24PM -0700, Kees Cook wrote:
> > On Tue, Apr 09, 2024 at 07:11:56PM -0400, Steven Rostedt wrote:  
> > > On Tue, 9 Apr 2024 15:23:07 -0700
> > > Kees Cook <keescook@chromium.org> wrote:
> > >   
> > > > Do we need to involve e820 at all? I think it might be possible to just
> > > > have pstore call request_mem_region() very early? Or does KASLR make
> > > > that unstable?  
> > > 
> > > Yeah, would that give the same physical memory each boot, and can we
> > > guarantee that KASLR will not map the kernel over the previous location?  
> > 
> > Hm, no, for physical memory it needs to get excluded very early, which
> > means e820.  
> 
> Whatever memory is reserved in arch/x86/kernel/e820.c, that happens after
> kaslr, so to begin with, a new memmap parameter should be also added to
> parse_memmap in arch/x86/boot/compressed/kaslr.c to ensure the same
> physical address will be available after KASLR.

But doesn't KASLR only affect virtual memory, not physical memory?

This just makes sure the physical memory it finds will not be used by the
system. Then ramoops does the mapping via vmap() I believe, to get a
virtual address to access the physical address.

> 
> More generally, memmap= is x86-specific and a bit of a hack.
> Why not add a new kernel parameter that is parsed in, say,
> mm/mm_init.c, creates the mmap_map (or whatever it ends up being named),
> and reserves that memory in memblock rather than in e820?

Hmm, I only did this approach because I'm familiar with the memmap hack and
extended upon it. But yeah, if I can do the same thing in mm_init.c it
could possibly work for all archs. Thanks for the suggestion, I'll play
with that.

> 
> This still will require update to arch/x86/boot/compressed/kaslr.c of
> course.

Oh, is the issue that if KASLR maps the kernel over this location, then we
lose it? We need to tell KASLR not to touch this location?

-- Steve

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-12 22:19           ` Steven Rostedt
@ 2024-04-15 17:22             ` Kees Cook
  2024-05-01 14:57               ` Mike Rapoport
  0 siblings, 1 reply; 28+ messages in thread
From: Kees Cook @ 2024-04-15 17:22 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Mike Rapoport, linux-kernel, linux-trace-kernel,
	Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Tony Luck, Guilherme G. Piccoli,
	linux-hardening, Guenter Roeck, Ross Zwisler, wklin,
	Vineeth Remanan Pillai, Joel Fernandes, Suleiman Souhlal,
	Linus Torvalds, Catalin Marinas, Will Deacon

On Fri, Apr 12, 2024 at 06:19:40PM -0400, Steven Rostedt wrote:
> On Fri, 12 Apr 2024 23:59:07 +0300
> Mike Rapoport <rppt@kernel.org> wrote:
> 
> > On Tue, Apr 09, 2024 at 04:41:24PM -0700, Kees Cook wrote:
> > > On Tue, Apr 09, 2024 at 07:11:56PM -0400, Steven Rostedt wrote:  
> > > > On Tue, 9 Apr 2024 15:23:07 -0700
> > > > Kees Cook <keescook@chromium.org> wrote:
> > > >   
> > > > > Do we need to involve e820 at all? I think it might be possible to just
> > > > > have pstore call request_mem_region() very early? Or does KASLR make
> > > > > that unstable?  
> > > > 
> > > > Yeah, would that give the same physical memory each boot, and can we
> > > > guarantee that KASLR will not map the kernel over the previous location?  
> > > 
> > > Hm, no, for physical memory it needs to get excluded very early, which
> > > means e820.  
> > 
> > Whatever memory is reserved in arch/x86/kernel/e820.c, that happens after
> > kaslr, so to begin with, a new memmap parameter should be also added to
> > parse_memmap in arch/x86/boot/compressed/kaslr.c to ensure the same
> > physical address will be available after KASLR.
> 
> But doesn't KASLR only affect virtual memory, not physical memory?

KASLR for x86 (and other archs, like arm64) does both physical and virtual
base randomization.

> This just makes sure the physical memory it finds will not be used by the
> system. Then ramoops does the mapping via vmap() I believe, to get a
> virtual address to access the physical address.

I was assuming, since you were in the e820 code, that it was
manipulating that before KASLR chose a location. But if not, yeah, Mike
is right -- you need to make sure this is getting done before
decompress_kernel().

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-04-12 17:22             ` Steven Rostedt
@ 2024-05-01 14:45               ` Mike Rapoport
  2024-05-01 14:54                 ` Steven Rostedt
  0 siblings, 1 reply; 28+ messages in thread
From: Mike Rapoport @ 2024-05-01 14:45 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Guilherme G. Piccoli, Luck, Tony, Kees Cook, Joel Fernandes,
	linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Fri, Apr 12, 2024 at 01:22:43PM -0400, Steven Rostedt wrote:
> On Fri, 12 Apr 2024 09:17:18 -0300
> "Guilherme G. Piccoli" <gpiccoli@igalia.com> wrote:
> 
> > Thanks Steve, seems a good idea. With that, I could test on kdumpst (the
> > tool used on Steam Deck), since it relies on modular pstore/ram.
> 
> Something like this could work.
> 
> -- Steve
> 
> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> index a8831ef30c73..878aee8b2399 100644
> --- a/arch/x86/kernel/e820.c
> +++ b/arch/x86/kernel/e820.c
> @@ -16,6 +16,7 @@
>  #include <linux/firmware-map.h>
>  #include <linux/sort.h>
>  #include <linux/memory_hotplug.h>
> +#include <linux/mm.h>
>  
>  #include <asm/e820/api.h>
>  #include <asm/setup.h>
> @@ -64,61 +65,6 @@ struct e820_table *e820_table __refdata			= &e820_table_init;
>  struct e820_table *e820_table_kexec __refdata		= &e820_table_kexec_init;
>  struct e820_table *e820_table_firmware __refdata	= &e820_table_firmware_init;
>  
> -/* For wildcard memory requests, have a table to find them later */
> -#define E820_MAX_MAPS		8
> -#define E820_MAP_NAME_SIZE	16
> -struct e820_mmap_map {
> -	char			name[E820_MAP_NAME_SIZE];
> -	u64			start;
> -	u64			size;
> -};
> -static struct e820_mmap_map e820_mmap_list[E820_MAX_MAPS] __initdata;
> -static int e820_mmap_size				__initdata;
> -
> -/* Add wildcard region with a lookup name */
> -static int __init e820_add_mmap(u64 start, u64 size, const char *name)
> -{
> -	struct e820_mmap_map *map;
> -
> -	if (!name || !name[0] || strlen(name) >= E820_MAP_NAME_SIZE)
> -		return -EINVAL;
> -
> -	if (e820_mmap_size >= E820_MAX_MAPS)
> -		return -1;
> -
> -	map = &e820_mmap_list[e820_mmap_size++];
> -	map->start = start;
> -	map->size = size;
> -	strcpy(map->name, name);
> -	return 0;
> -}
> -
> -/**
> - * memmap_named - Find a wildcard region with a given name
> - * @name: The name that is attached to a wildcard region
> - * @start: If found, holds the start address
> - * @size: If found, holds the size of the address.
> - *
> - * Returns: 1 if found or 0 if not found.
> - */
> -int __init memmap_named(const char *name, u64 *start, u64 *size)
> -{
> -	struct e820_mmap_map *map;
> -	int i;
> -
> -	for (i = 0; i < e820_mmap_size; i++) {
> -		map = &e820_mmap_list[i];
> -		if (!map->size)
> -			continue;
> -		if (strcmp(name, map->name) == 0) {
> -			*start = map->start;
> -			*size = map->size;
> -			return 1;
> -		}
> -	}
> -	return 0;
> -}
> -
>  /* For PCI or other memory-mapped resources */
>  unsigned long pci_mem_start = 0xaeedbabe;
>  #ifdef CONFIG_PCI
> @@ -1024,6 +970,8 @@ static int __init parse_memmap_one(char *p)
>  		e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
>  	} else if (*p == '*') {
>  		u64 align;
> +		int ret;
> +
>  		/* Followed by alignment and ':' then the name */
>  		align = memparse(p+1, &p);
>  		start_at = e820__region(mem_size, align);
> @@ -1032,9 +980,10 @@ static int __init parse_memmap_one(char *p)
>  		if (*p != ':')
>  			return -EINVAL;
>  		p++;
> -		e820_add_mmap(start_at, mem_size, p);
> +		ret = memmap_add(start_at, mem_size, p);
>  		p += strlen(p);
> -		e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
> +		if (!ret)
> +			e820__range_add(start_at, mem_size, E820_TYPE_RESERVED);
>  	} else if (*p == '!') {
>  		start_at = memparse(p+1, &p);
>  		e820__range_add(start_at, mem_size, E820_TYPE_PRAM);
> diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
> index c200388399fb..22d2e2731dc2 100644
> --- a/fs/pstore/ram.c
> +++ b/fs/pstore/ram.c
> @@ -919,7 +919,6 @@ static void __init ramoops_register_dummy(void)
>  {
>  	struct ramoops_platform_data pdata;
>  
> -#ifndef MODULE
>  	/* Only allowed when builtin */
>  	if (mem_name) {
>  		u64 start;
> @@ -930,7 +929,6 @@ static void __init ramoops_register_dummy(void)
>  			mem_size = size;
>  		}
>  	}
> -#endif
>  
>  	/*
>  	 * Prepare a dummy platform data structure to carry the module
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index cf9b34454c6f..6ce1c6929d1f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -4203,5 +4203,6 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
>  }
>  
>  int memmap_named(const char *name, u64 *start, u64 *size);
> +int memmap_add(long start, long size, const char *name);
>  
>  #endif /* _LINUX_MM_H */
> diff --git a/mm/memory.c b/mm/memory.c
> index 7a29f17df7c1..fe054e1bb678 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -120,12 +120,6 @@ static bool vmf_orig_pte_uffd_wp(struct vm_fault *vmf)
>  	return pte_marker_uffd_wp(vmf->orig_pte);
>  }
>  
> -int __init __weak memmap_named(const char *name, u64 *start, u64 *size)
> -{
> -	pr_info("Kernel command line: memmap=nn*align:name not supported on this kernel");
> -	/* zero means not found */
> -	return 0;
> -}
>  
>  /*
>   * A number of key systems in x86 including ioremap() rely on the assumption
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 549e76af8f82..e5b729b83fdc 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -154,6 +154,77 @@ static __init int set_mminit_loglevel(char *str)
>  early_param("mminit_loglevel", set_mminit_loglevel);
>  #endif /* CONFIG_DEBUG_MEMORY_INIT */
>  
> +/* For wildcard memory requests, have a table to find them later */
> +#define MAX_MAPS		8
> +#define MAP_NAME_SIZE	16
> +struct mmap_map {
> +	char			name[MAP_NAME_SIZE];
> +	long			start;
> +	long			size;
> +};
> +static struct mmap_map early_mmap_list[MAX_MAPS] __initdata;
> +static int early_mmap_size			__initdata;
> +static struct mmap_map *mmap_list;
> +
> +/* Add wildcard region with a lookup name */
> +int memmap_add(long start, long size, const char *name)
> +{
> +	struct mmap_map *map;
> +
> +	if (!name || !name[0] || strlen(name) >= MAP_NAME_SIZE)
> +		return -EINVAL;
> +
> +	if (early_mmap_size >= MAX_MAPS)
> +		return -1;
> +
> +	map = &early_mmap_list[early_mmap_size++];
> +	map->start = start;
> +	map->size = size;
> +	strcpy(map->name, name);
> +	return 0;
> +}
> +
> +static void __init memmap_copy(void)
> +{
> +	if (!early_mmap_size)
> +		return;
> +
> +	mmap_list = kcalloc(early_mmap_size + 1, sizeof(*mmap_list), GFP_KERNEL);

We can keep early_mmap_size after boot and then we don't need to allocate
an extra element in the mmap_list. No strong feeling here, though.

> +	if (!mmap_list)
> +		return;
> +
> +	for (int i = 0; i < early_mmap_size; i++)
> +		mmap_list[i] = early_mmap_list[i];
> +}

With something like this

/*
 * Parse early_reserve_mem=nn:align:name
 */
static int __init early_reserve_mem(char *p)
{
	phys_addr_t start, size, align;
	char *oldp;
	int err;

	if (!p)
		return -EINVAL;

	oldp = p;
	size = memparse(p, &p);
	if (p == oldp)
		return -EINVAL;

	if (*p != ':')
		return -EINVAL;

	align = memparse(p+1, &p);
	if (*p != ':')
		return -EINVAL;

	start = memblock_phys_alloc(size, align);
	if (!start)
		return -ENOMEM;

	p++;
	err = memmap_add(start, size, p);
	if (err) {
		memblock_phys_free(start, size);
		return err;
	}

	p += strlen(p);

	return *p == '\0' ? 0 : -EINVAL;
}
__setup("early_reserve_mem=", early_reserve_mem);

you don't need to touch e820 and it will work the same for all
architectures.

We'd need a better naming, but I couldn't think of something better yet.

> +
> +/**
> + * memmap_named - Find a wildcard region with a given name
> + * @name: The name that is attached to a wildcard region
> + * @start: If found, holds the start address
> + * @size: If found, holds the size of the region.
> + *
> + * Returns: 1 if found or 0 if not found.
> + */
> +int memmap_named(const char *name, u64 *start, u64 *size)
> +{
> +	struct mmap_map *map;
> +
> +	if (!mmap_list)
> +		return 0;
> +
> +	for (int i = 0; mmap_list[i].name[0]; i++) {
> +		map = &mmap_list[i];
> +		if (!map->size)
> +			continue;
> +		if (strcmp(name, map->name) == 0) {
> +			*start = map->start;
> +			*size = map->size;
> +			return 1;
> +		}
> +	}
> +	return 0;
> +}
> +
>  struct kobject *mm_kobj;
>  
>  #ifdef CONFIG_SMP
> @@ -2793,4 +2864,5 @@ void __init mm_core_init(void)
>  	pti_init();
>  	kmsan_init_runtime();
>  	mm_cache_init();
> +	memmap_copy();
>  }

-- 
Sincerely yours,
Mike.
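
[Editor's note: the add/lookup pair in the patch above is simple enough to
model outside the kernel. The sketch below mirrors the patch's table logic
in plain userspace C (kernel types, `__init` annotations, and the boot-time
copy dropped); the names follow the patch, but nothing here is kernel code.]

```c
#include <stdint.h>
#include <string.h>

#define MAX_MAPS      8
#define MAP_NAME_SIZE 16

struct mmap_map {
	char     name[MAP_NAME_SIZE];
	uint64_t start;
	uint64_t size;
};

static struct mmap_map mmap_list[MAX_MAPS];
static int mmap_count;

/* Mirrors memmap_add(): reject empty or oversized names and a full table. */
int memmap_add(uint64_t start, uint64_t size, const char *name)
{
	if (!name || !name[0] || strlen(name) >= MAP_NAME_SIZE)
		return -1;
	if (mmap_count >= MAX_MAPS)
		return -1;

	mmap_list[mmap_count].start = start;
	mmap_list[mmap_count].size  = size;
	strcpy(mmap_list[mmap_count].name, name);
	mmap_count++;
	return 0;
}

/* Mirrors memmap_named(): returns 1 and fills start/size if found, else 0. */
int memmap_named(const char *name, uint64_t *start, uint64_t *size)
{
	for (int i = 0; i < mmap_count; i++) {
		if (mmap_list[i].size && !strcmp(name, mmap_list[i].name)) {
			*start = mmap_list[i].start;
			*size  = mmap_list[i].size;
			return 1;
		}
	}
	return 0;
}
```

A consumer in the style of patch 2/2 would call `memmap_named()` once at
init time with the name given on the command line and, on a hit, use the
returned physical range for its backing store.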

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-05-01 14:45               ` Mike Rapoport
@ 2024-05-01 14:54                 ` Steven Rostedt
  2024-05-01 15:30                   ` Mike Rapoport
  0 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-05-01 14:54 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Guilherme G. Piccoli, Luck, Tony, Kees Cook, Joel Fernandes,
	linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Wed, 1 May 2024 17:45:49 +0300
Mike Rapoport <rppt@kernel.org> wrote:

> > +static void __init memmap_copy(void)
> > +{
> > +	if (!early_mmap_size)
> > +		return;
> > +
> > > +	mmap_list = kcalloc(early_mmap_size + 1, sizeof(*mmap_list), GFP_KERNEL);  
> 
> We can keep early_mmap_size after boot and then we don't need to allocate
> an extra element in the mmap_list. No strong feeling here, though.
> 
> > +	if (!mmap_list)
> > +		return;
> > +
> > +	for (int i = 0; i < early_mmap_size; i++)
> > +		mmap_list[i] = early_mmap_list[i];
> > +}  
> 
> With something like this
> 
> /*
>  * Parse early_reserve_mem=nn:align:name
>  */
> static int __init early_reserve_mem(char *p)
> {
> 	phys_addr_t start, size, align;
> 	char *oldp;
> 	int err;
> 
> 	if (!p)
> 		return -EINVAL;
> 
> 	oldp = p;
> 	size = memparse(p, &p);
> 	if (p == oldp)
> 		return -EINVAL;
> 
> 	if (*p != ':')
> 		return -EINVAL;
> 
> 	align = memparse(p+1, &p);
> 	if (*p != ':')
> 		return -EINVAL;
> 
> 	start = memblock_phys_alloc(size, align);

So this will allocate the same physical location for every boot, if booting
the same kernel and having the same physical memory layout?

-- Steve


> 	if (!start)
> 		return -ENOMEM;
> 
> 	p++;
> 	err = memmap_add(start, size, p);
> 	if (err) {
> 		memblock_phys_free(start, size);
> 		return err;
> 	}
> 
> 	p += strlen(p);
> 
> 	return *p == '\0' ? 0 : -EINVAL;
> }
> __setup("early_reserve_mem=", early_reserve_mem);
> 
> you don't need to touch e820 and it will work the same for all
> architectures.
> 
> We'd need a better naming, but I couldn't think of something better yet.
> 
> > +
> > +/**
> > + * memmap_named - Find a wildcard region with a given name
> > + * @name: The name that is attached to a wildcard region
> > + * @start: If found, holds the start address
> > + * @size: If found, holds the size of the region.
> > + *
> > + * Returns: 1 if found or 0 if not found.
> > + */
> > +int memmap_named(const char *name, u64 *start, u64 *size)
> > +{
> > +	struct mmap_map *map;
> > +
> > +	if (!mmap_list)
> > +		return 0;
> > +
> > +	for (int i = 0; mmap_list[i].name[0]; i++) {
> > +		map = &mmap_list[i];
> > +		if (!map->size)
> > +			continue;
> > +		if (strcmp(name, map->name) == 0) {
> > +			*start = map->start;
> > +			*size = map->size;
> > +			return 1;
> > +		}
> > +	}
> > +	return 0;
> > +}
> > +
> >  struct kobject *mm_kobj;
> >  
> >  #ifdef CONFIG_SMP
> > @@ -2793,4 +2864,5 @@ void __init mm_core_init(void)
> >  	pti_init();
> >  	kmsan_init_runtime();
> >  	mm_cache_init();
> > +	memmap_copy();
> >  }  
> 



* Re: [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name
  2024-04-15 17:22             ` Kees Cook
@ 2024-05-01 14:57               ` Mike Rapoport
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Rapoport @ 2024-05-01 14:57 UTC (permalink / raw)
  To: Kees Cook
  Cc: Steven Rostedt, linux-kernel, linux-trace-kernel,
	Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
	Liam R. Howlett, Vlastimil Babka, Lorenzo Stoakes, linux-mm,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Tony Luck, Guilherme G. Piccoli,
	linux-hardening, Guenter Roeck, Ross Zwisler, wklin,
	Vineeth Remanan Pillai, Joel Fernandes, Suleiman Souhlal,
	Linus Torvalds, Catalin Marinas, Will Deacon

On Mon, Apr 15, 2024 at 10:22:53AM -0700, Kees Cook wrote:
> On Fri, Apr 12, 2024 at 06:19:40PM -0400, Steven Rostedt wrote:
> > On Fri, 12 Apr 2024 23:59:07 +0300
> > Mike Rapoport <rppt@kernel.org> wrote:
> > 
> > > On Tue, Apr 09, 2024 at 04:41:24PM -0700, Kees Cook wrote:
> > > > On Tue, Apr 09, 2024 at 07:11:56PM -0400, Steven Rostedt wrote:  
> > > > > On Tue, 9 Apr 2024 15:23:07 -0700
> > > > > Kees Cook <keescook@chromium.org> wrote:
> > > > >   
> > > > > > Do we need to involve e820 at all? I think it might be possible to just
> > > > > > have pstore call request_mem_region() very early? Or does KASLR make
> > > > > > that unstable?  
> > > > > 
> > > > > Yeah, would that give the same physical memory each boot, and can we
> > > > > guarantee that KASLR will not map the kernel over the previous location?  
> > > > 
> > > > Hm, no, for physical memory it needs to get excluded very early, which
> > > > means e820.  
> > > 
> > > Whatever memory is reserved in arch/x86/kernel/e820.c, that happens after
> > > kaslr, so to begin with, a new memmap parameter should be also added to
> > > parse_memmap in arch/x86/boot/compressed/kaslr.c to ensure the same
> > > physical address will be available after KASLR.
> > 
> > But doesn't KASLR only affect virtual memory not physical memory?
> 
> KASLR for x86 (and other archs, like arm64) do both physical and virtual
> base randomization.
> 
> > This just makes sure the physical memory it finds will not be used by the
> > system. Then ramoops does the mapping via vmap() I believe, to get a
> > virtual address to access the physical address.
> 
> I was assuming, since you were in the e820 code, that it was
> manipulating that before KASLR chose a location. But if not, yeah, Mike
> is right -- you need to make sure this is getting done before
> decompress_kernel().

Right now kaslr can handle up to 4 memmap regions and parse_memmap() in
arch/x86/boot/compressed/kaslr.c should be updated for a new memmap type.

But I think it's better to add a new kernel parameter as I suggested in
another email and teach mem_avoid_memmap() in kaslr.c to deal with it, as
well as with crashkernel=size@offset, btw.
 
> -- 
> Kees Cook

-- 
Sincerely yours,
Mike.
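
[Editor's note: to make the KASLR interaction above concrete, whatever form
the parameter takes, the decompression-time code must treat the reservation
as a region the randomized kernel base may not land in. The real logic is
`mem_avoid_overlap()` in arch/x86/boot/compressed/kaslr.c; the fragment
below is only a userspace sketch of the interval test involved, not the
kernel code.]

```c
#include <stdbool.h>
#include <stdint.h>

struct mem_vector {
	uint64_t start;
	uint64_t size;
};

/* Half-open interval overlap: [a, a+size) and [b, b+size) share a byte. */
static bool mem_overlaps(const struct mem_vector *a,
			 const struct mem_vector *b)
{
	return a->start < b->start + b->size &&
	       b->start < a->start + a->size;
}

/*
 * Reject a candidate kernel placement that collides with any avoided
 * region (a named reservation, the initrd, the command line, ...).
 */
bool placement_collides(const struct mem_vector *cand,
			const struct mem_vector *avoid, int n)
{
	for (int i = 0; i < n; i++)
		if (mem_overlaps(cand, &avoid[i]))
			return true;
	return false;
}
```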


* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-05-01 14:54                 ` Steven Rostedt
@ 2024-05-01 15:30                   ` Mike Rapoport
  2024-05-01 16:09                     ` Steven Rostedt
  0 siblings, 1 reply; 28+ messages in thread
From: Mike Rapoport @ 2024-05-01 15:30 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Guilherme G. Piccoli, Luck, Tony, Kees Cook, Joel Fernandes,
	linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Wed, May 01, 2024 at 10:54:55AM -0400, Steven Rostedt wrote:
> On Wed, 1 May 2024 17:45:49 +0300
> Mike Rapoport <rppt@kernel.org> wrote:
> 
> > > +static void __init memmap_copy(void)
> > > +{
> > > +	if (!early_mmap_size)
> > > +		return;
> > > +
> > > +	mmap_list = kcalloc(early_mmap_size + 1, sizeof(*mmap_list), GFP_KERNEL);  
> > 
> > We can keep early_mmap_size after boot and then we don't need to allocate
> > an extra element in the mmap_list. No strong feeling here, though.
> > 
> > > +	if (!mmap_list)
> > > +		return;
> > > +
> > > +	for (int i = 0; i < early_mmap_size; i++)
> > > +		mmap_list[i] = early_mmap_list[i];
> > > +}  
> > 
> > With something like this
> > 
> > /*
> >  * Parse early_reserve_mem=nn:align:name
> >  */
> > static int __init early_reserve_mem(char *p)
> > {
> > 	phys_addr_t start, size, align;
> > 	char *oldp;
> > 	int err;
> > 
> > 	if (!p)
> > 		return -EINVAL;
> > 
> > 	oldp = p;
> > 	size = memparse(p, &p);
> > 	if (p == oldp)
> > 		return -EINVAL;
> > 
> > 	if (*p != ':')
> > 		return -EINVAL;
> > 
> > 	align = memparse(p+1, &p);
> > 	if (*p != ':')
> > 		return -EINVAL;
> > 
> > 	start = memblock_phys_alloc(size, align);
> 
> So this will allocate the same physical location for every boot, if booting
> the same kernel and having the same physical memory layout?

Up to KASLR, which might use that location for the kernel image.
But it's the same as allocating from e820 after KASLR.

And, TBH, I don't have good ideas how to ensure the same physical location
with randomization of the physical address of the kernel image.
 
> -- Steve
> 
> 
> > 	if (!start)
> > 		return -ENOMEM;
> > 
> > 	p++;
> > 	err = memmap_add(start, size, p);
> > 	if (err) {
> > 		memblock_phys_free(start, size);
> > 		return err;
> > 	}
> > 
> > 	p += strlen(p);
> > 
> > 	return *p == '\0' ? 0 : -EINVAL;
> > }
> > __setup("early_reserve_mem=", early_reserve_mem);
> > 
> > you don't need to touch e820 and it will work the same for all
> > architectures.
> > 
> > We'd need a better naming, but I couldn't think of something better yet.
> > 
> > > +
> > > +/**
> > > + * memmap_named - Find a wildcard region with a given name
> > > + * @name: The name that is attached to a wildcard region
> > > + * @start: If found, holds the start address
> > > + * @size: If found, holds the size of the region.
> > > + *
> > > + * Returns: 1 if found or 0 if not found.
> > > + */
> > > +int memmap_named(const char *name, u64 *start, u64 *size)
> > > +{
> > > +	struct mmap_map *map;
> > > +
> > > +	if (!mmap_list)
> > > +		return 0;
> > > +
> > > +	for (int i = 0; mmap_list[i].name[0]; i++) {
> > > +		map = &mmap_list[i];
> > > +		if (!map->size)
> > > +			continue;
> > > +		if (strcmp(name, map->name) == 0) {
> > > +			*start = map->start;
> > > +			*size = map->size;
> > > +			return 1;
> > > +		}
> > > +	}
> > > +	return 0;
> > > +}
> > > +
> > >  struct kobject *mm_kobj;
> > >  
> > >  #ifdef CONFIG_SMP
> > > @@ -2793,4 +2864,5 @@ void __init mm_core_init(void)
> > >  	pti_init();
> > >  	kmsan_init_runtime();
> > >  	mm_cache_init();
> > > +	memmap_copy();
> > >  }  
> > 
> 

-- 
Sincerely yours,
Mike.
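
[Editor's note: for anyone wanting to experiment with the proposed
`size:align:name` syntax outside the kernel, the split is easy to
prototype. The sketch below emulates `memparse()` in userspace (only the
K/M/G suffixes; the real helper handles more); it illustrates the parsing
only and is not the kernel implementation.]

```c
#include <stdint.h>
#include <stdlib.h>

/* Userspace stand-in for the kernel's memparse(): parse a number with an
 * optional K/M/G suffix, advancing *retptr past what was consumed. */
static uint64_t memparse_user(const char *ptr, char **retptr)
{
	uint64_t val = strtoull(ptr, retptr, 0);

	switch (**retptr) {
	case 'G': case 'g': val <<= 30; (*retptr)++; break;
	case 'M': case 'm': val <<= 20; (*retptr)++; break;
	case 'K': case 'k': val <<= 10; (*retptr)++; break;
	}
	return val;
}

/* Split "size:align:name"; returns 0 on success, -1 on malformed input. */
int parse_reserve(const char *arg, uint64_t *size, uint64_t *align,
		  const char **name)
{
	char *p;

	*size = memparse_user(arg, &p);
	if (p == arg || *p != ':')
		return -1;
	*align = memparse_user(p + 1, &p);
	if (*p != ':' || !p[1])
		return -1;
	*name = p + 1;
	return 0;
}
```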


* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-05-01 15:30                   ` Mike Rapoport
@ 2024-05-01 16:09                     ` Steven Rostedt
  2024-05-01 16:11                       ` Mike Rapoport
  0 siblings, 1 reply; 28+ messages in thread
From: Steven Rostedt @ 2024-05-01 16:09 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Guilherme G. Piccoli, Luck, Tony, Kees Cook, Joel Fernandes,
	linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Wed, 1 May 2024 18:30:40 +0300
Mike Rapoport <rppt@kernel.org> wrote:

> > > /*
> > >  * Parse early_reserve_mem=nn:align:name
> > >  */
> > > static int __init early_reserve_mem(char *p)
> > > {
> > > 	phys_addr_t start, size, align;
> > > 	char *oldp;
> > > 	int err;
> > > 
> > > 	if (!p)
> > > 		return -EINVAL;
> > > 
> > > 	oldp = p;
> > > 	size = memparse(p, &p);
> > > 	if (p == oldp)
> > > 		return -EINVAL;
> > > 
> > > 	if (*p != ':')
> > > 		return -EINVAL;
> > > 
> > > 	align = memparse(p+1, &p);
> > > 	if (*p != ':')
> > > 		return -EINVAL;
> > > 
> > > 	start = memblock_phys_alloc(size, align);  
> > 
> > So this will allocate the same physical location for every boot, if booting
> > the same kernel and having the same physical memory layout?  
> 
> Up to KASLR, which might use that location for the kernel image.
> But it's the same as allocating from e820 after KASLR.
> 
> And, TBH, I don't have good ideas how to ensure the same physical location
> with randomization of the physical address of the kernel image.

I'll try it out. Looking at arch/x86/boot/compressed/kaslr.c, if I read the
code correctly, it creates up to 100 slots in which to place the kernel.

The method I used was to make sure that the allocation was always done at
the top address of memory, which I think would in most cases never be
assigned by KASLR.

This looks to just grab the next available physical address, which KASLR
can most definitely mess with.

I would still like to get the highest address possible.

-- Steve
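
[Editor's note: the "highest address possible" idea can be illustrated with
a toy allocator. This is not memblock (`memblock_phys_alloc()` in top-down
mode does the real work); it only shows why picking the highest aligned
address that fits is reproducible across boots when the free memory layout
is identical.]

```c
#include <stdint.h>

/* A free physical range, half-open: [base, end). */
struct range {
	uint64_t base;
	uint64_t end;
};

/*
 * Toy top-down allocator: return the highest align-aligned address at
 * which `size` bytes fit inside any free range, or 0 on failure.
 * `align` must be a power of two. Deterministic for a fixed range list,
 * which is the property wanted here.
 */
uint64_t alloc_top_down(const struct range *free_list, int n,
			uint64_t size, uint64_t align)
{
	uint64_t best = 0;

	for (int i = 0; i < n; i++) {
		if (free_list[i].end - free_list[i].base < size)
			continue;
		uint64_t cand = (free_list[i].end - size) & ~(align - 1);
		if (cand >= free_list[i].base && cand > best)
			best = cand;
	}
	return best;
}
```

Run over the same free list on every boot, this always yields the same
address; the concern raised in the thread is that KASLR can change the
effective free list between boots.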

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently
  2024-05-01 16:09                     ` Steven Rostedt
@ 2024-05-01 16:11                       ` Mike Rapoport
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Rapoport @ 2024-05-01 16:11 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Guilherme G. Piccoli, Luck, Tony, Kees Cook, Joel Fernandes,
	linux-kernel, linux-trace-kernel, Masami Hiramatsu, Mark Rutland,
	Mathieu Desnoyers, Andrew Morton, Liam R. Howlett,
	Vlastimil Babka, Lorenzo Stoakes, linux-mm, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Peter Zijlstra, linux-hardening, Guenter Roeck, Ross Zwisler,
	wklin, Vineeth Remanan Pillai, Suleiman Souhlal, Linus Torvalds,
	Catalin Marinas, Will Deacon

On Wed, May 01, 2024 at 12:09:04PM -0400, Steven Rostedt wrote:
> On Wed, 1 May 2024 18:30:40 +0300
> Mike Rapoport <rppt@kernel.org> wrote:
> 
> > > > /*
> > > >  * Parse early_reserve_mem=nn:align:name
> > > >  */
> > > > static int __init early_reserve_mem(char *p)
> > > > {
> > > > 	phys_addr_t start, size, align;
> > > > 	char *oldp;
> > > > 	int err;
> > > > 
> > > > 	if (!p)
> > > > 		return -EINVAL;
> > > > 
> > > > 	oldp = p;
> > > > 	size = memparse(p, &p);
> > > > 	if (p == oldp)
> > > > 		return -EINVAL;
> > > > 
> > > > 	if (*p != ':')
> > > > 		return -EINVAL;
> > > > 
> > > > 	align = memparse(p+1, &p);
> > > > 	if (*p != ':')
> > > > 		return -EINVAL;
> > > > 
> > > > 	start = memblock_phys_alloc(size, align);  
> > > 
> > > So this will allocate the same physical location for every boot, if booting
> > > the same kernel and having the same physical memory layout?  
> > 
> > Up to KASLR, which might use that location for the kernel image.
> > But it's the same as allocating from e820 after KASLR.
> > 
> > And, TBH, I don't have good ideas how to ensure the same physical location
> > with randomization of the physical address of the kernel image.
> 
> I'll try it out. Looking at arch/x86/boot/compressed/kaslr.c, if I read the
> code correctly, it creates up to 100 slots in which to place the kernel.
> 
> The method I used was to make sure that the allocation was always done at
> the top address of memory, which I think would in most cases never be
> assigned by KASLR.
> 
> This looks to just grab the next available physical address, which KASLR
> can most definitely mess with.

On x86, memblock allocates from the top of memory. As this runs later than
the e820 reservations, the allocation will be lower than one made via e820,
but still close to the top of memory.
 
> I would still like to get the highest address possible.
> 
> -- Steve

-- 
Sincerely yours,
Mike.


Thread overview: 28+ messages
2024-04-09 21:02 [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
2024-04-09 21:02 ` [POC][RFC][PATCH 1/2] mm/x86: Add wildcard * option as memmap=nn*align:name Steven Rostedt
2024-04-09 22:23   ` Kees Cook
2024-04-09 23:11     ` Steven Rostedt
2024-04-09 23:41       ` Kees Cook
2024-04-12 20:59         ` Mike Rapoport
2024-04-12 22:19           ` Steven Rostedt
2024-04-15 17:22             ` Kees Cook
2024-05-01 14:57               ` Mike Rapoport
2024-04-09 21:02 ` [POC][RFC][PATCH 2/2] pstore/ramoops: Add ramoops.mem_name= command line option Steven Rostedt
2024-04-09 22:18   ` Kees Cook
2024-04-09 23:14     ` Steven Rostedt
2024-04-09 21:23 ` [POC][RFC][PATCH 0/2] pstore/mm/x86: Add wildcard memmap to map pstore consistently Steven Rostedt
2024-04-09 22:19   ` Kees Cook
2024-04-09 22:25     ` Luck, Tony
2024-04-09 22:41       ` Joel Fernandes
2024-04-09 23:16       ` Steven Rostedt
2024-04-09 23:37       ` Kees Cook
2024-04-09 23:52         ` Luck, Tony
2024-04-11 19:11       ` Guilherme G. Piccoli
2024-04-11 19:40         ` Steven Rostedt
2024-04-12 12:17           ` Guilherme G. Piccoli
2024-04-12 17:22             ` Steven Rostedt
2024-05-01 14:45               ` Mike Rapoport
2024-05-01 14:54                 ` Steven Rostedt
2024-05-01 15:30                   ` Mike Rapoport
2024-05-01 16:09                     ` Steven Rostedt
2024-05-01 16:11                       ` Mike Rapoport
