* [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes
@ 2020-07-27 21:50 Arvind Sankar
  2020-07-27 21:50 ` [PATCH 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
                   ` (16 more replies)
  0 siblings, 17 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

The first 7 patches are cleanup and minor bugfixes on the x86 KASLR
code.

The last one is a bit of an RFC. The memory regions used for KASLR are
stored as 64-bit even on a 32-bit kernel. There are still a few local
variables that are 32-bit; as far as I can see that is okay, because
they are assigned values that have already been limited to 32 bits, but
it does make it a little harder to verify that the code is correct.
Since KASLR cannot actually use 64-bit regions for the 32-bit kernel,
the patch reduces regions to their below-4G segment when creating them,
making the actual variables 32-bit. Alternatively, the few local
variables could be upgraded to 64-bit.

Arvind Sankar (8):
  x86/kaslr: Make command line handling safer
  x86/kaslr: Remove bogus warning and unnecessary goto
  x86/kaslr: Fix process_efi_entries comment
  x86/kaslr: Initialize mem_limit to the real maximum address
  x86/kaslr: Simplify __process_mem_region
  x86/kaslr: Simplify process_gb_huge_pages
  x86/kaslr: Clean up slot handling
  x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel

 arch/x86/boot/compressed/acpi.c  |   7 +-
 arch/x86/boot/compressed/kaslr.c | 228 ++++++++++++-------------------
 arch/x86/boot/compressed/misc.h  |  19 ++-
 3 files changed, 111 insertions(+), 143 deletions(-)

-- 
2.26.2



* [PATCH 1/8] x86/kaslr: Make command line handling safer
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-27 21:50 ` [PATCH 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Handle the possibility that the command line is NULL.

Replace the open-coded strlen loop with a call to strlen().
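
Roughly, the failure mode being guarded against is the following (a
sketch using the names from the patch below; get_cmd_line_ptr()
presumably returns 0 when the boot loader provides no command line):

  char *args = (char *)get_cmd_line_ptr();  /* may be NULL */
  size_t len = strlen(args);                /* old code: NULL dereference */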

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d7408af55738..a4af89646094 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -268,15 +268,19 @@ static void parse_gb_huge_pages(char *param, char *val)
 static void handle_mem_options(void)
 {
 	char *args = (char *)get_cmd_line_ptr();
-	size_t len = strlen((char *)args);
+	size_t len;
 	char *tmp_cmdline;
 	char *param, *val;
 	u64 mem_size;
 
+	if (!args)
+		return;
+
 	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
 		!strstr(args, "hugepages"))
 		return;
 
+	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -399,8 +403,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 {
 	unsigned long init_size = boot_params->hdr.init_size;
 	u64 initrd_start, initrd_size;
-	u64 cmd_line, cmd_line_size;
-	char *ptr;
+	unsigned long cmd_line, cmd_line_size = 0;
 
 	/*
 	 * Avoid the region that is unsafe to overlap during
@@ -421,12 +424,10 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
-	cmd_line  = (u64)boot_params->ext_cmd_line_ptr << 32;
-	cmd_line |= boot_params->hdr.cmd_line_ptr;
+	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
-	ptr = (char *)(unsigned long)cmd_line;
-	for (cmd_line_size = 0; ptr[cmd_line_size++];)
-		;
+	if (cmd_line)
+		cmd_line_size = strlen((char *)cmd_line);
 	mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
 	mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
 	add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
-- 
2.26.2



* [PATCH 2/8] x86/kaslr: Remove bogus warning and unnecessary goto
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
  2020-07-27 21:50 ` [PATCH 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-27 21:50 ` [PATCH 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
                   ` (14 subsequent siblings)
  16 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Drop the warning on seeing "--" in handle_mem_options(). The warning
triggers whenever one of the memory options is present on the command
line together with "--", but there is no problem in that case.
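
For example, a perfectly reasonable command line such as (hypothetical
values)

  memmap=16M$0x1000000 -- init=/bin/sh

would hit the warning, even though everything after "--" is simply
passed to init and has nothing to do with the memory options.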

Replace goto with break.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index a4af89646094..21cd9e07f1f6 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -295,10 +295,8 @@ static void handle_mem_options(void)
 	while (*args) {
 		args = next_arg(args, &param, &val);
 		/* Stop at -- */
-		if (!val && strcmp(param, "--") == 0) {
-			warn("Only '--' specified in cmdline");
-			goto out;
-		}
+		if (!val && strcmp(param, "--") == 0)
+			break;
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
@@ -311,7 +309,7 @@ static void handle_mem_options(void)
 				continue;
 			mem_size = memparse(p, &p);
 			if (mem_size == 0)
-				goto out;
+				break;
 
 			mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
@@ -319,7 +317,6 @@ static void handle_mem_options(void)
 		}
 	}
 
-out:
 	free(tmp_cmdline);
 	return;
 }
-- 
2.26.2



* [PATCH 3/8] x86/kaslr: Fix process_efi_entries comment
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
  2020-07-27 21:50 ` [PATCH 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
  2020-07-27 21:50 ` [PATCH 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-27 21:50 ` [PATCH 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
                   ` (13 subsequent siblings)
  16 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Since commit
  0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")
process_efi_entries() will return true if we have an EFI memmap, not
just if it contains EFI_MEMORY_MORE_RELIABLE regions.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 21cd9e07f1f6..207fcb7e7b71 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -741,8 +741,8 @@ static bool process_mem_region(struct mem_vector *region,
 
 #ifdef CONFIG_EFI
 /*
- * Returns true if mirror region found (and must have been processed
- * for slots adding)
+ * Returns true if we processed the EFI memmap, which we prefer over the E820
+ * table if it is available.
  */
 static bool
 process_efi_entries(unsigned long minimum, unsigned long image_size)
-- 
2.26.2



* [PATCH 4/8] x86/kaslr: Initialize mem_limit to the real maximum address
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (2 preceding siblings ...)
  2020-07-27 21:50 ` [PATCH 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-27 21:50 ` [PATCH 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level
paging or 4PiB with 5-level paging). This is currently not enforced by
KASLR, which thus implicitly relies on physical memory being limited to
less than 64TiB.

On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by
special checks in __process_mem_region.

Initialize mem_limit to the architecture-dependent maximum instead of
ULLONG_MAX, and make sure the command-line arguments can only decrease
it. This makes the enforcement explicit on 64-bit, and eliminates the
32-bit-specific checks that keep the kernel below 512M.

Check upfront to make sure the minimum address is below the limit before
doing any work.
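
Roughly, the intended behaviour is the following (a sketch only, using
the names from the patch below):

  /* Architecture-dependent hard limit for the kernel's placement: */
  mem_limit = IS_ENABLED(CONFIG_X86_32) ? KERNEL_IMAGE_SIZE : MAXMEM;

  /* "mem="/"memmap=" may only lower the limit, never raise it: */
  if (size > 0 && size < mem_limit)
          mem_limit = size;

  /* Bail out early if nothing below the limit can hold the kernel: */
  if (minimum + image_size > mem_limit)
          return 0;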

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 41 +++++++++++++++++---------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 207fcb7e7b71..758d78433f94 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -94,8 +94,11 @@ static unsigned long get_boot_seed(void)
 static bool memmap_too_large;
 
 
-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
-static unsigned long long mem_limit = ULLONG_MAX;
+/*
+ * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
+ * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
+ */
+static unsigned long long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -221,7 +224,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 		if (start == 0) {
 			/* Store the specified memory limit if size > 0 */
-			if (size > 0)
+			if (size > 0 && size < mem_limit)
 				mem_limit = size;
 
 			continue;
@@ -311,7 +314,8 @@ static void handle_mem_options(void)
 			if (mem_size == 0)
 				break;
 
-			mem_limit = mem_size;
+			if (mem_size < mem_limit)
+				mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
 			mem_avoid_memmap(PARSE_EFI, val);
 		}
@@ -322,7 +326,9 @@ static void handle_mem_options(void)
 }
 
 /*
- * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
+ * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM)
+ * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit.
+ *
  * The mem_avoid array is used to store the ranges that need to be avoided
  * when KASLR searches for an appropriate random address. We must avoid any
  * regions that are unsafe to overlap with during decompression, and other
@@ -619,10 +625,6 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long start_orig, end;
 	struct mem_vector cur_entry;
 
-	/* On 32-bit, ignore entries entirely above our maximum. */
-	if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE)
-		return;
-
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
 		return;
@@ -655,11 +657,6 @@ static void __process_mem_region(struct mem_vector *entry,
 		/* Reduce size by any delta from the original address. */
 		region.size -= region.start - start_orig;
 
-		/* On 32-bit, reduce region size to fit within max size. */
-		if (IS_ENABLED(CONFIG_X86_32) &&
-		    region.start + region.size > KERNEL_IMAGE_SIZE)
-			region.size = KERNEL_IMAGE_SIZE - region.start;
-
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
 			return;
@@ -844,15 +841,16 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	/* Bail out early if it's impossible to succeed. */
+	if (minimum + image_size > mem_limit)
+		return 0;
+
 	/* Check if we had too many memmaps. */
 	if (memmap_too_large) {
 		debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n");
 		return 0;
 	}
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
-
 	if (process_efi_entries(minimum, image_size))
 		return slots_fetch_random();
 
@@ -865,8 +863,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
 	/* Align image_size for easy slot calculations. */
 	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
 
@@ -913,6 +909,11 @@ void choose_random_location(unsigned long input,
 	/* Prepare to add new identity pagetables on demand. */
 	initialize_identity_maps();
 
+	if (IS_ENABLED(CONFIG_X86_32))
+		mem_limit = KERNEL_IMAGE_SIZE;
+	else
+		mem_limit = MAXMEM;
+
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init(input, input_size, *output);
 
@@ -922,6 +923,8 @@ void choose_random_location(unsigned long input,
 	 * location:
 	 */
 	min_addr = min(*output, 512UL << 20);
+	/* Make sure minimum is aligned. */
+	min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
 
 	/* Walk available memory entries to find a random address. */
 	random_addr = find_random_phys_addr(min_addr, output_size);
-- 
2.26.2



* [PATCH 5/8] x86/kaslr: Simplify __process_mem_region
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (3 preceding siblings ...)
  2020-07-27 21:50 ` [PATCH 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-28 10:57   ` Ingo Molnar
  2020-07-27 21:50 ` [PATCH 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Get rid of unnecessary temporary variables and redundant tests in
__process_mem_region.

Fix one minor bug: in case of an overlap, the space at the beginning of
the region should be used even when it is exactly image_size bytes, not
only when it is strictly larger.
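
For example (hypothetical numbers): with region.start = 100M,
overlap.start = 110M and image_size = 10M, the 10M in front of the
overlap can hold the kernel, but the old test

  if (overlap.start > region.start + image_size)

compares 110M > 110M and fails, so that space is never offered as a
slot.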

Change type of minimum/image_size arguments in process_mem_region to
unsigned long. These actually can never be above 4G (even on x86_64),
and they're unsigned long in every other function except this one.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 50 ++++++++------------------------
 1 file changed, 12 insertions(+), 38 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 758d78433f94..850e131121f7 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -622,42 +622,24 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long start_orig, end;
-	struct mem_vector cur_entry;
+	unsigned long region_end;
 
-	/* Ignore entries entirely below our minimum. */
-	if (entry->start + entry->size < minimum)
-		return;
-
-	/* Ignore entries above memory limit */
-	end = min(entry->size + entry->start, mem_limit);
-	if (entry->start >= end)
-		return;
-	cur_entry.start = entry->start;
-	cur_entry.size = end - entry->start;
-
-	region.start = cur_entry.start;
-	region.size = cur_entry.size;
+	/* Clamp region to minimum and memory limit. */
+	region.start = clamp_val(entry->start, minimum, mem_limit);
+	region_end = clamp_val(entry->start + entry->size, minimum, mem_limit);
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
-		start_orig = region.start;
-
-		/* Potentially raise address to minimum location. */
-		if (region.start < minimum)
-			region.start = minimum;
-
 		/* Potentially raise address to meet alignment needs. */
 		region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
 
 		/* Did we raise the address above the passed in memory entry? */
-		if (region.start > cur_entry.start + cur_entry.size)
+		if (region.start > region_end)
 			return;
 
-		/* Reduce size by any delta from the original address. */
-		region.size -= region.start - start_orig;
+		region.size = region_end - region.start;
 
-		/* Return if region can't contain decompressed kernel */
+		/* Return if region can't contain decompressed kernel. */
 		if (region.size < image_size)
 			return;
 
@@ -668,27 +650,19 @@ static void __process_mem_region(struct mem_vector *entry,
 		}
 
 		/* Store beginning of region if holds at least image_size. */
-		if (overlap.start > region.start + image_size) {
-			struct mem_vector beginning;
-
-			beginning.start = region.start;
-			beginning.size = overlap.start - region.start;
-			process_gb_huge_pages(&beginning, image_size);
+		if (overlap.start - region.start >= image_size) {
+			region.size = overlap.start - region.start;
+			process_gb_huge_pages(&region, image_size);
 		}
 
-		/* Return if overlap extends to or past end of region. */
-		if (overlap.start + overlap.size >= region.start + region.size)
-			return;
-
 		/* Clip off the overlapping region and start over. */
-		region.size -= overlap.start - region.start + overlap.size;
 		region.start = overlap.start + overlap.size;
 	}
 }
 
 static bool process_mem_region(struct mem_vector *region,
-			       unsigned long long minimum,
-			       unsigned long long image_size)
+			       unsigned long minimum,
+			       unsigned long image_size)
 {
 	int i;
 	/*
-- 
2.26.2



* [PATCH 6/8] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (4 preceding siblings ...)
  2020-07-27 21:50 ` [PATCH 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-28 10:58   ` Ingo Molnar
  2020-07-27 21:50 ` [PATCH 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Short-circuit the whole function on 32-bit.

Replace the loop that determines the number of 1GB pages with
arithmetic.

Fix one minor bug: if the end of the region is aligned on a 1GB
boundary, the current code does not use the last available 1GB page due
to an off-by-one error.
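
For illustration (hypothetical region): take [1G, 4G) with
max_gb_huge_pages >= 3. The old loop

  while (size > PUD_SIZE && max_gb_huge_pages) {
          size -= PUD_SIZE;
          max_gb_huge_pages--;
          i++;
  }

stops at i = 2 for size = 3G because of the strict '>', wasting the
third 1GB page, while the arithmetic version

  pud_start = ALIGN(region->start, PUD_SIZE);                     /* 1G */
  pud_end   = ALIGN_DOWN(region->start + region->size, PUD_SIZE); /* 4G */
  gb_huge_pages = (pud_end - pud_start) >> PUD_SHIFT;             /* 3  */

counts all three.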

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 48 ++++++++++++++------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 850e131121f7..29ec964b1330 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -546,49 +546,43 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 static void
 process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 {
-	unsigned long addr, size = 0;
+	unsigned long pud_start, pud_end, gb_huge_pages;
 	struct mem_vector tmp;
-	int i = 0;
 
-	if (!max_gb_huge_pages) {
+	if (IS_ENABLED(CONFIG_X86_32) || !max_gb_huge_pages) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	addr = ALIGN(region->start, PUD_SIZE);
-	/* Did we raise the address above the passed in memory entry? */
-	if (addr < region->start + region->size)
-		size = region->size - (addr - region->start);
-
-	/* Check how many 1GB huge pages can be filtered out: */
-	while (size > PUD_SIZE && max_gb_huge_pages) {
-		size -= PUD_SIZE;
-		max_gb_huge_pages--;
-		i++;
-	}
+	/* Are there any 1GB pages in the region? */
+	pud_start = ALIGN(region->start, PUD_SIZE);
+	pud_end = ALIGN_DOWN(region->start + region->size, PUD_SIZE);
 
 	/* No good 1GB huge pages found: */
-	if (!i) {
+	if (pud_start >= pud_end) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	/*
-	 * Skip those 'i'*1GB good huge pages, and continue checking and
-	 * processing the remaining head or tail part of the passed region
-	 * if available.
-	 */
-
-	if (addr >= region->start + image_size) {
+	/* Check if the head part of the region is usable. */
+	if (pud_start - region->start >= image_size) {
 		tmp.start = region->start;
-		tmp.size = addr - region->start;
+		tmp.size = pud_start - region->start;
 		store_slot_info(&tmp, image_size);
 	}
 
-	size  = region->size - (addr - region->start) - i * PUD_SIZE;
-	if (size >= image_size) {
-		tmp.start = addr + i * PUD_SIZE;
-		tmp.size = size;
+	/* Skip the good 1GB pages. */
+	gb_huge_pages = (pud_end - pud_start) >> PUD_SHIFT;
+	if (gb_huge_pages > max_gb_huge_pages) {
+		pud_end = pud_start + (max_gb_huge_pages << PUD_SHIFT);
+		max_gb_huge_pages = 0;
+	} else
+		max_gb_huge_pages -= gb_huge_pages;
+
+	/* Check if the tail part of the region is usable. */
+	if (region->start + region->size - pud_end >= image_size) {
+		tmp.start = pud_end;
+		tmp.size = region->start + region->size - pud_end;
 		store_slot_info(&tmp, image_size);
 	}
 }
-- 
2.26.2



* [PATCH 7/8] x86/kaslr: Clean up slot handling
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (5 preceding siblings ...)
  2020-07-27 21:50 ` [PATCH 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-28 10:59   ` Ingo Molnar
  2020-07-27 21:50 ` [PATCH 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

The number of slots and slot areas can be unsigned int: on 64-bit the
maximum amount of memory is 2^52 bytes and the minimum alignment is
2^21 bytes, so the number of slots cannot exceed 2^31. The number of
slot areas is limited by MAX_SLOT_AREA, currently 100. Replace the
types used for slot numbers, currently a mix of int and unsigned long,
with unsigned int consistently.

Drop the unnecessary check in store_slot_info() that the number of
slots is not zero; the calculation guarantees it is at least 1.

Drop the unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN
in find_random_virt_addr(); it cannot change the result: the largest valid
slot is the largest n that satisfies

  minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE

(since minimum is already aligned) and so n is equal to

  (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN

even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN.
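
For instance (hypothetical values), with CONFIG_PHYSICAL_ALIGN = 2M,
minimum = 16M, KERNEL_IMAGE_SIZE = 512M and image_size = 31M:

  old: (512M - 16M - ALIGN(31M, 2M)) / 2M + 1 = (464M / 2M) + 1 = 233
  new: (512M - 16M - 31M) / 2M + 1            = (465M / 2M) + 1 = 233

so dropping the alignment does not change the number of slots.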

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 36 ++++++++++++--------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 29ec964b1330..ffbcc0444f70 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -511,16 +511,14 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 
 struct slot_area {
 	unsigned long addr;
-	int num;
+	unsigned int num;
 };
 
 #define MAX_SLOT_AREA 100
 
 static struct slot_area slot_areas[MAX_SLOT_AREA];
-
-static unsigned long slot_max;
-
-static unsigned long slot_area_index;
+static unsigned int slot_area_index;
+static unsigned int slot_max;
 
 static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 {
@@ -530,13 +528,10 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 		return;
 
 	slot_area.addr = region->start;
-	slot_area.num = (region->size - image_size) /
-			CONFIG_PHYSICAL_ALIGN + 1;
+	slot_area.num = 1 + (region->size - image_size) / CONFIG_PHYSICAL_ALIGN;
 
-	if (slot_area.num > 0) {
-		slot_areas[slot_area_index++] = slot_area;
-		slot_max += slot_area.num;
-	}
+	slot_areas[slot_area_index++] = slot_area;
+	slot_max += slot_area.num;
 }
 
 /*
@@ -589,8 +584,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 
 static unsigned long slots_fetch_random(void)
 {
-	unsigned long slot;
-	int i;
+	unsigned int slot, i;
 
 	/* Handle case of no slots stored. */
 	if (slot_max == 0)
@@ -603,7 +597,7 @@ static unsigned long slots_fetch_random(void)
 			slot -= slot_areas[i].num;
 			continue;
 		}
-		return slot_areas[i].addr + slot * CONFIG_PHYSICAL_ALIGN;
+		return slot_areas[i].addr + (unsigned long)slot * CONFIG_PHYSICAL_ALIGN;
 	}
 
 	if (i == slot_area_index)
@@ -819,28 +813,24 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 		return 0;
 	}
 
-	if (process_efi_entries(minimum, image_size))
-		return slots_fetch_random();
+	if (!process_efi_entries(minimum, image_size))
+		process_e820_entries(minimum, image_size);
 
-	process_e820_entries(minimum, image_size);
 	return slots_fetch_random();
 }
 
 static unsigned long find_random_virt_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
-	unsigned long slots, random_addr;
-
-	/* Align image_size for easy slot calculations. */
-	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
+	unsigned int slots;
+	unsigned long random_addr;
 
 	/*
 	 * There are how many CONFIG_PHYSICAL_ALIGN-sized slots
 	 * that can hold image_size within the range of minimum to
 	 * KERNEL_IMAGE_SIZE?
 	 */
-	slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
-		 CONFIG_PHYSICAL_ALIGN + 1;
+	slots = 1 + (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN;
 
 	random_addr = kaslr_get_random_long("Virtual") % slots;
 
-- 
2.26.2



* [PATCH 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (6 preceding siblings ...)
  2020-07-27 21:50 ` [PATCH 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
@ 2020-07-27 21:50 ` Arvind Sankar
  2020-07-28 11:05   ` Ingo Molnar
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 21:50 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Commit
  f28442497b5c ("x86/boot: Fix KASLR and memmap= collision")
converted the mem_vector type to use 64-bit fields even on the 32-bit
kernel, based on Thomas's review [0]. However:
- the code still doesn't consistently use 64-bit types. For instance,
  mem_avoid_overlap() uses 32-bit types when checking for overlaps. This
  is actually okay, as the passed-in memory range will have been clipped
  to below 4G, but it is difficult to reason about the safety of the
  code.
- KASLR on 32-bit can't use memory above 4G anyway, so it's pointless
  to keep track of ranges above 4G.

Switch the type back to unsigned long, and use a helper function to
clip regions to below 4G on 32-bit when storing the mem_avoid,
immovable_mem, EFI, E820 and command-line memory regions.

[0] https://lore.kernel.org/linux-nvdimm/alpine.DEB.2.20.1701111246400.3579@nanos/
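
As an illustration (hypothetical e820 entry), on a 32-bit kernel a RAM
range straddling 4G such as

  mem_vector_ctor(0xc0000000 /* 3G */, 0x80000000 /* 2G */)

is stored as start = 0xc0000000, size = 0x3fffffff, i.e. only the
below-4G part is kept (the end is capped at ULONG_MAX). On 64-bit,
ULONG_MAX is 2^64 - 1 and nothing is clipped.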

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/acpi.c  |  7 +++----
 arch/x86/boot/compressed/kaslr.c | 25 ++++++++++---------------
 arch/x86/boot/compressed/misc.h  | 19 +++++++++++++++++--
 3 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/arch/x86/boot/compressed/acpi.c b/arch/x86/boot/compressed/acpi.c
index 8bcbcee54aa1..d09e0ec5637e 100644
--- a/arch/x86/boot/compressed/acpi.c
+++ b/arch/x86/boot/compressed/acpi.c
@@ -106,7 +106,7 @@ static acpi_physical_address kexec_get_rsdp_addr(void)
 	}
 
 	/* Get systab from boot params. */
-	systab = (efi_system_table_64_t *) (ei->efi_systab | ((__u64)ei->efi_systab_hi << 32));
+	systab = (efi_system_table_64_t *) (ei->efi_systab | ((u64)ei->efi_systab_hi << 32));
 	if (!systab)
 		error("EFI system table not found in kexec boot_params.");
 
@@ -139,7 +139,7 @@ static acpi_physical_address efi_get_rsdp_addr(void)
 
 	/* Get systab from boot params. */
 #ifdef CONFIG_X86_64
-	systab = ei->efi_systab | ((__u64)ei->efi_systab_hi << 32);
+	systab = ei->efi_systab | ((u64)ei->efi_systab_hi << 32);
 #else
 	if (ei->efi_systab_hi || ei->efi_memmap_hi) {
 		debug_putstr("Error getting RSDP address: EFI system table located above 4GB.\n");
@@ -404,8 +404,7 @@ int count_immovable_mem_regions(void)
 
 			ma = (struct acpi_srat_mem_affinity *)sub_table;
 			if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) && ma->length) {
-				immovable_mem[num].start = ma->base_address;
-				immovable_mem[num].size = ma->length;
+				immovable_mem[num] = mem_vector_ctor(ma->base_address, ma->length);
 				num++;
 			}
 
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index ffbcc0444f70..8bf4ceceb8ce 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -98,7 +98,7 @@ static bool memmap_too_large;
  * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
  * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
  */
-static unsigned long long mem_limit;
+static unsigned long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -141,8 +141,7 @@ enum parse_mode {
 };
 
 static int
-parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
-		enum parse_mode mode)
+parse_memmap(char *p, u64 *start, u64 *size, enum parse_mode mode)
 {
 	char *oldp;
 
@@ -172,7 +171,7 @@ parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
 			 */
 			*size = 0;
 		} else {
-			unsigned long long flags;
+			u64 flags;
 
 			/*
 			 * efi_fake_mem=nn@ss:attr the attr specifies
@@ -211,7 +210,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 	while (str && (i < MAX_MEMMAP_REGIONS)) {
 		int rc;
-		unsigned long long start, size;
+		u64 start, size;
 		char *k = strchr(str, ',');
 
 		if (k)
@@ -230,8 +229,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 			continue;
 		}
 
-		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].start = start;
-		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].size = size;
+		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i] = mem_vector_ctor(start, size);
 		i++;
 	}
 
@@ -422,8 +420,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	initrd_start |= boot_params->hdr.ramdisk_image;
 	initrd_size  = (u64)boot_params->ext_ramdisk_size << 32;
 	initrd_size |= boot_params->hdr.ramdisk_size;
-	mem_avoid[MEM_AVOID_INITRD].start = initrd_start;
-	mem_avoid[MEM_AVOID_INITRD].size = initrd_size;
+	mem_avoid[MEM_AVOID_INITRD] = mem_vector_ctor(initrd_start, initrd_size);
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
@@ -673,7 +670,7 @@ static bool process_mem_region(struct mem_vector *region,
 	 * immovable memory and @region.
 	 */
 	for (i = 0; i < num_immovable_mem; i++) {
-		unsigned long long start, end, entry_end, region_end;
+		unsigned long start, end, entry_end, region_end;
 		struct mem_vector entry;
 
 		if (!mem_overlaps(region, &immovable_mem[i]))
@@ -728,7 +725,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
 	}
 	pmap =  e->efi_memmap;
 #else
-	pmap = (e->efi_memmap | ((__u64)e->efi_memmap_hi << 32));
+	pmap = (e->efi_memmap | ((u64)e->efi_memmap_hi << 32));
 #endif
 
 	nr_desc = e->efi_memmap_size / e->efi_memdesc_size;
@@ -765,8 +762,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
 		    !(md->attribute & EFI_MEMORY_MORE_RELIABLE))
 			continue;
 
-		region.start = md->phys_addr;
-		region.size = md->num_pages << EFI_PAGE_SHIFT;
+		region = mem_vector_ctor(md->phys_addr, md->num_pages << EFI_PAGE_SHIFT);
 		if (process_mem_region(&region, minimum, image_size))
 			break;
 	}
@@ -793,8 +789,7 @@ static void process_e820_entries(unsigned long minimum,
 		/* Skip non-RAM entries. */
 		if (entry->type != E820_TYPE_RAM)
 			continue;
-		region.start = entry->addr;
-		region.size = entry->size;
+		region = mem_vector_ctor(entry->addr, entry->size);
 		if (process_mem_region(&region, minimum, image_size))
 			break;
 	}
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 726e264410ff..fd7c49ab0f02 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -70,10 +70,25 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize);
 int cmdline_find_option_bool(const char *option);
 
 struct mem_vector {
-	unsigned long long start;
-	unsigned long long size;
+	unsigned long start;
+	unsigned long size;
 };
 
+static inline
+struct mem_vector mem_vector_ctor(u64 start, u64 size)
+{
+	u64 end = start + size;
+	struct mem_vector ret;
+
+	start = min_t(u64, start, ULONG_MAX);
+	end   = min_t(u64, end,   ULONG_MAX);
+
+	ret.start = (unsigned long)start;
+	ret.size  = (unsigned long)(end - start);
+
+	return ret;
+}
+
 #if CONFIG_RANDOMIZE_BASE
 /* kaslr.c */
 void choose_random_location(unsigned long input,
-- 
2.26.2



* [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (7 preceding siblings ...)
  2020-07-27 21:50 ` [PATCH 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 11:06   ` Ingo Molnar
                     ` (22 more replies)
  2020-07-27 23:07 ` [PATCH v2 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
                   ` (7 subsequent siblings)
  16 siblings, 23 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

The first 7 patches are cleanup and minor bugfixes on the x86 KASLR
code.

The last one is a bit of an RFC. The memory regions used for KASLR are
stored as 64-bit even on a 32-bit kernel. There are still a few local
variables that are 32-bit; as far as I can see that is okay, because
they are assigned values that have already been limited to 32 bits, but
it does make it a little harder to verify that the code is correct.
Since KASLR cannot actually use 64-bit regions for the 32-bit kernel,
the patch reduces regions to their below-4G segment when creating them,
making the actual variables 32-bit. Alternatively, the few local
variables could be upgraded to 64-bit.

v1->v2:
- Fix a bug in the bugfix 5/8: overlap.start can be smaller than
  region.start, so we shouldn't subtract before comparing (see the note
  below).
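
  The subtraction is wrong because the values are unsigned: if, say,
  overlap.start = 16M and region.start = 32M, then in v1's

    if (overlap.start - region.start >= image_size)

  the subtraction wraps around to a huge value and the test passes even
  though there is no room in front of the overlap. v2 keeps the check in
  the form

    if (overlap.start >= region.start + image_size)

  which cannot wrap for the address ranges involved here.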

Arvind Sankar (8):
  x86/kaslr: Make command line handling safer
  x86/kaslr: Remove bogus warning and unnecessary goto
  x86/kaslr: Fix process_efi_entries comment
  x86/kaslr: Initialize mem_limit to the real maximum address
  x86/kaslr: Simplify __process_mem_region
  x86/kaslr: Simplify process_gb_huge_pages
  x86/kaslr: Clean up slot handling
  x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel

 arch/x86/boot/compressed/acpi.c  |   7 +-
 arch/x86/boot/compressed/kaslr.c | 228 ++++++++++++-------------------
 arch/x86/boot/compressed/misc.h  |  19 ++-
 3 files changed, 111 insertions(+), 143 deletions(-)

-- 
2.26.2



* [PATCH v2 1/8] x86/kaslr: Make command line handling safer
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (8 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
                     ` (2 more replies)
  2020-07-27 23:07 ` [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
                   ` (6 subsequent siblings)
  16 siblings, 3 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Handle the possibility that the command line is NULL.

Replace the open-coded strlen loop with a call to strlen().

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d7408af55738..a4af89646094 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -268,15 +268,19 @@ static void parse_gb_huge_pages(char *param, char *val)
 static void handle_mem_options(void)
 {
 	char *args = (char *)get_cmd_line_ptr();
-	size_t len = strlen((char *)args);
+	size_t len;
 	char *tmp_cmdline;
 	char *param, *val;
 	u64 mem_size;
 
+	if (!args)
+		return;
+
 	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
 		!strstr(args, "hugepages"))
 		return;
 
+	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -399,8 +403,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 {
 	unsigned long init_size = boot_params->hdr.init_size;
 	u64 initrd_start, initrd_size;
-	u64 cmd_line, cmd_line_size;
-	char *ptr;
+	unsigned long cmd_line, cmd_line_size = 0;
 
 	/*
 	 * Avoid the region that is unsafe to overlap during
@@ -421,12 +424,10 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
-	cmd_line  = (u64)boot_params->ext_cmd_line_ptr << 32;
-	cmd_line |= boot_params->hdr.cmd_line_ptr;
+	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
-	ptr = (char *)(unsigned long)cmd_line;
-	for (cmd_line_size = 0; ptr[cmd_line_size++];)
-		;
+	if (cmd_line)
+		cmd_line_size = strlen((char *)cmd_line);
 	mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
 	mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
 	add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
-- 
2.26.2



* [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (9 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
                     ` (2 more replies)
  2020-07-27 23:07 ` [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
                   ` (5 subsequent siblings)
  16 siblings, 3 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Drop the warning on seeing "--" in handle_mem_options(). The warning
triggers whenever one of the memory options is present on the command
line together with "--", but there is no problem in that case.

Replace goto with break.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index a4af89646094..21cd9e07f1f6 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -295,10 +295,8 @@ static void handle_mem_options(void)
 	while (*args) {
 		args = next_arg(args, &param, &val);
 		/* Stop at -- */
-		if (!val && strcmp(param, "--") == 0) {
-			warn("Only '--' specified in cmdline");
-			goto out;
-		}
+		if (!val && strcmp(param, "--") == 0)
+			break;
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
@@ -311,7 +309,7 @@ static void handle_mem_options(void)
 				continue;
 			mem_size = memparse(p, &p);
 			if (mem_size == 0)
-				goto out;
+				break;
 
 			mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
@@ -319,7 +317,6 @@ static void handle_mem_options(void)
 		}
 	}
 
-out:
 	free(tmp_cmdline);
 	return;
 }
-- 
2.26.2



* [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (10 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
                     ` (2 more replies)
  2020-07-27 23:07 ` [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
                   ` (4 subsequent siblings)
  16 siblings, 3 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Since commit
  0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")
process_efi_entries() will return true if we have an EFI memmap, not
just if it contains EFI_MEMORY_MORE_RELIABLE regions.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 21cd9e07f1f6..207fcb7e7b71 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -741,8 +741,8 @@ static bool process_mem_region(struct mem_vector *region,
 
 #ifdef CONFIG_EFI
 /*
- * Returns true if mirror region found (and must have been processed
- * for slots adding)
+ * Returns true if we processed the EFI memmap, which we prefer over the E820
+ * table if it is available.
  */
 static bool
 process_efi_entries(unsigned long minimum, unsigned long image_size)
-- 
2.26.2



* [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (11 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
                     ` (2 more replies)
  2020-07-27 23:07 ` [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
                   ` (3 subsequent siblings)
  16 siblings, 3 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level
paging or 4PiB with 5-level paging). This is currently not enforced by
KASLR, which thus implicitly relies on physical memory being limited to
less than 64TiB.

On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by
special checks in __process_mem_region.

Initialize mem_limit to the architecture-dependent maximum instead of
ULLONG_MAX, and make sure the command-line arguments can only decrease
it. This makes the enforcement explicit on 64-bit, and eliminates the
32-bit-specific checks that keep the kernel below 512M.

Check upfront to make sure the minimum address is below the limit before
doing any work.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 41 +++++++++++++++++---------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 207fcb7e7b71..758d78433f94 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -94,8 +94,11 @@ static unsigned long get_boot_seed(void)
 static bool memmap_too_large;
 
 
-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
-static unsigned long long mem_limit = ULLONG_MAX;
+/*
+ * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
+ * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
+ */
+static unsigned long long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -221,7 +224,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 		if (start == 0) {
 			/* Store the specified memory limit if size > 0 */
-			if (size > 0)
+			if (size > 0 && size < mem_limit)
 				mem_limit = size;
 
 			continue;
@@ -311,7 +314,8 @@ static void handle_mem_options(void)
 			if (mem_size == 0)
 				break;
 
-			mem_limit = mem_size;
+			if (mem_size < mem_limit)
+				mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
 			mem_avoid_memmap(PARSE_EFI, val);
 		}
@@ -322,7 +326,9 @@ static void handle_mem_options(void)
 }
 
 /*
- * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
+ * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM)
+ * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit.
+ *
  * The mem_avoid array is used to store the ranges that need to be avoided
  * when KASLR searches for an appropriate random address. We must avoid any
  * regions that are unsafe to overlap with during decompression, and other
@@ -619,10 +625,6 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long start_orig, end;
 	struct mem_vector cur_entry;
 
-	/* On 32-bit, ignore entries entirely above our maximum. */
-	if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE)
-		return;
-
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
 		return;
@@ -655,11 +657,6 @@ static void __process_mem_region(struct mem_vector *entry,
 		/* Reduce size by any delta from the original address. */
 		region.size -= region.start - start_orig;
 
-		/* On 32-bit, reduce region size to fit within max size. */
-		if (IS_ENABLED(CONFIG_X86_32) &&
-		    region.start + region.size > KERNEL_IMAGE_SIZE)
-			region.size = KERNEL_IMAGE_SIZE - region.start;
-
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
 			return;
@@ -844,15 +841,16 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	/* Bail out early if it's impossible to succeed. */
+	if (minimum + image_size > mem_limit)
+		return 0;
+
 	/* Check if we had too many memmaps. */
 	if (memmap_too_large) {
 		debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n");
 		return 0;
 	}
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
-
 	if (process_efi_entries(minimum, image_size))
 		return slots_fetch_random();
 
@@ -865,8 +863,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
 	/* Align image_size for easy slot calculations. */
 	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
 
@@ -913,6 +909,11 @@ void choose_random_location(unsigned long input,
 	/* Prepare to add new identity pagetables on demand. */
 	initialize_identity_maps();
 
+	if (IS_ENABLED(CONFIG_X86_32))
+		mem_limit = KERNEL_IMAGE_SIZE;
+	else
+		mem_limit = MAXMEM;
+
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init(input, input_size, *output);
 
@@ -922,6 +923,8 @@ void choose_random_location(unsigned long input,
 	 * location:
 	 */
 	min_addr = min(*output, 512UL << 20);
+	/* Make sure minimum is aligned. */
+	min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
 
 	/* Walk available memory entries to find a random address. */
 	random_addr = find_random_phys_addr(min_addr, output_size);
-- 
2.26.2



* [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (12 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 19:25   ` Kees Cook
  2020-07-27 23:07 ` [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Get rid of unnecessary temporary variables and redundant tests in
__process_mem_region.

Fix one minor bug: in case of an overlap, the space at the beginning of
the region should be used even when it is exactly image_size bytes, not
only when it is strictly larger.

Change type of minimum/image_size arguments in process_mem_region to
unsigned long. These actually can never be above 4G (even on x86_64),
and they're unsigned long in every other function except this one.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 50 ++++++++------------------------
 1 file changed, 12 insertions(+), 38 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 758d78433f94..d3d8f1f6d5d3 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -622,42 +622,24 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long start_orig, end;
-	struct mem_vector cur_entry;
+	unsigned long region_end;
 
-	/* Ignore entries entirely below our minimum. */
-	if (entry->start + entry->size < minimum)
-		return;
-
-	/* Ignore entries above memory limit */
-	end = min(entry->size + entry->start, mem_limit);
-	if (entry->start >= end)
-		return;
-	cur_entry.start = entry->start;
-	cur_entry.size = end - entry->start;
-
-	region.start = cur_entry.start;
-	region.size = cur_entry.size;
+	/* Clamp region to minimum and memory limit. */
+	region.start = clamp_val(entry->start, minimum, mem_limit);
+	region_end = clamp_val(entry->start + entry->size, minimum, mem_limit);
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
-		start_orig = region.start;
-
-		/* Potentially raise address to minimum location. */
-		if (region.start < minimum)
-			region.start = minimum;
-
 		/* Potentially raise address to meet alignment needs. */
 		region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
 
 		/* Did we raise the address above the passed in memory entry? */
-		if (region.start > cur_entry.start + cur_entry.size)
+		if (region.start > region_end)
 			return;
 
-		/* Reduce size by any delta from the original address. */
-		region.size -= region.start - start_orig;
+		region.size = region_end - region.start;
 
-		/* Return if region can't contain decompressed kernel */
+		/* Return if region can't contain decompressed kernel. */
 		if (region.size < image_size)
 			return;
 
@@ -668,27 +650,19 @@ static void __process_mem_region(struct mem_vector *entry,
 		}
 
 		/* Store beginning of region if holds at least image_size. */
-		if (overlap.start > region.start + image_size) {
-			struct mem_vector beginning;
-
-			beginning.start = region.start;
-			beginning.size = overlap.start - region.start;
-			process_gb_huge_pages(&beginning, image_size);
+		if (overlap.start >= region.start + image_size) {
+			region.size = overlap.start - region.start;
+			process_gb_huge_pages(&region, image_size);
 		}
 
-		/* Return if overlap extends to or past end of region. */
-		if (overlap.start + overlap.size >= region.start + region.size)
-			return;
-
 		/* Clip off the overlapping region and start over. */
-		region.size -= overlap.start - region.start + overlap.size;
 		region.start = overlap.start + overlap.size;
 	}
 }
 
 static bool process_mem_region(struct mem_vector *region,
-			       unsigned long long minimum,
-			       unsigned long long image_size)
+			       unsigned long minimum,
+			       unsigned long image_size)
 {
 	int i;
 	/*
-- 
2.26.2



* [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (13 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
@ 2020-07-27 23:07 ` Arvind Sankar
  2020-07-28 19:27   ` Kees Cook
  2020-07-27 23:08 ` [PATCH v2 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
  2020-07-27 23:08 ` [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:07 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Short-circuit the whole function on 32-bit.

Replace the loop that determines the number of 1GB pages with
arithmetic.

Fix one minor bug: if the end of the region is aligned on a 1GB
boundary, the current code does not use the last available 1GB page due
to an off-by-one error.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 48 ++++++++++++++------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d3d8f1f6d5d3..91c5f9771f42 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -546,49 +546,43 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 static void
 process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 {
-	unsigned long addr, size = 0;
+	unsigned long pud_start, pud_end, gb_huge_pages;
 	struct mem_vector tmp;
-	int i = 0;
 
-	if (!max_gb_huge_pages) {
+	if (IS_ENABLED(CONFIG_X86_32) || !max_gb_huge_pages) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	addr = ALIGN(region->start, PUD_SIZE);
-	/* Did we raise the address above the passed in memory entry? */
-	if (addr < region->start + region->size)
-		size = region->size - (addr - region->start);
-
-	/* Check how many 1GB huge pages can be filtered out: */
-	while (size > PUD_SIZE && max_gb_huge_pages) {
-		size -= PUD_SIZE;
-		max_gb_huge_pages--;
-		i++;
-	}
+	/* Are there any 1GB pages in the region? */
+	pud_start = ALIGN(region->start, PUD_SIZE);
+	pud_end = ALIGN_DOWN(region->start + region->size, PUD_SIZE);
 
 	/* No good 1GB huge pages found: */
-	if (!i) {
+	if (pud_start >= pud_end) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	/*
-	 * Skip those 'i'*1GB good huge pages, and continue checking and
-	 * processing the remaining head or tail part of the passed region
-	 * if available.
-	 */
-
-	if (addr >= region->start + image_size) {
+	/* Check if the head part of the region is usable. */
+	if (pud_start - region->start >= image_size) {
 		tmp.start = region->start;
-		tmp.size = addr - region->start;
+		tmp.size = pud_start - region->start;
 		store_slot_info(&tmp, image_size);
 	}
 
-	size  = region->size - (addr - region->start) - i * PUD_SIZE;
-	if (size >= image_size) {
-		tmp.start = addr + i * PUD_SIZE;
-		tmp.size = size;
+	/* Skip the good 1GB pages. */
+	gb_huge_pages = (pud_end - pud_start) >> PUD_SHIFT;
+	if (gb_huge_pages > max_gb_huge_pages) {
+		pud_end = pud_start + (max_gb_huge_pages << PUD_SHIFT);
+		max_gb_huge_pages = 0;
+	} else
+		max_gb_huge_pages -= gb_huge_pages;
+
+	/* Check if the tail part of the region is usable. */
+	if (region->start + region->size - pud_end >= image_size) {
+		tmp.start = pud_end;
+		tmp.size = region->start + region->size - pud_end;
 		store_slot_info(&tmp, image_size);
 	}
 }
-- 
2.26.2



* [PATCH v2 7/8] x86/kaslr: Clean up slot handling
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (14 preceding siblings ...)
  2020-07-27 23:07 ` [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
@ 2020-07-27 23:08 ` Arvind Sankar
  2020-07-28 19:34   ` Kees Cook
  2020-07-27 23:08 ` [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:08 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

The number of slots and slot areas can be unsigned int, since on 64-bit,
the maximum amount of memory is 2^52, the minimum alignment is 2^21, so
the slot number cannot be greater than 2^31. The slot areas are limited
by MAX_SLOT_AREA, currently 100. Replace the type used for slot number,
which is currently a mix of int and unsigned long, with unsigned int
consistently.

Drop unnecessary check that number of slots is not zero in
store_slot_info, it's guaranteed to be at least 1 by the calculation.

Drop unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in
find_random_virt_addr, it cannot change the result: the largest valid
slot is the largest n that satisfies

  minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE

(since minimum is already aligned) and so n is equal to

  (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN

even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN.
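
As a sanity check with made-up numbers (minimum = 0, KERNEL_IMAGE_SIZE =
512M, CONFIG_PHYSICAL_ALIGN = 2M, image_size = 11M), both variants give

  (512M - 11M) / 2M + 1 = 251    (image_size left unaligned)
  (512M - 12M) / 2M + 1 = 251    (image_size aligned up to 12M)

because KERNEL_IMAGE_SIZE - minimum is itself a multiple of
CONFIG_PHYSICAL_ALIGN.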

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 36 ++++++++++++--------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 91c5f9771f42..eca2acc65e2a 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -511,16 +511,14 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 
 struct slot_area {
 	unsigned long addr;
-	int num;
+	unsigned int num;
 };
 
 #define MAX_SLOT_AREA 100
 
 static struct slot_area slot_areas[MAX_SLOT_AREA];
-
-static unsigned long slot_max;
-
-static unsigned long slot_area_index;
+static unsigned int slot_area_index;
+static unsigned int slot_max;
 
 static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 {
@@ -530,13 +528,10 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 		return;
 
 	slot_area.addr = region->start;
-	slot_area.num = (region->size - image_size) /
-			CONFIG_PHYSICAL_ALIGN + 1;
+	slot_area.num = 1 + (region->size - image_size) / CONFIG_PHYSICAL_ALIGN;
 
-	if (slot_area.num > 0) {
-		slot_areas[slot_area_index++] = slot_area;
-		slot_max += slot_area.num;
-	}
+	slot_areas[slot_area_index++] = slot_area;
+	slot_max += slot_area.num;
 }
 
 /*
@@ -589,8 +584,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 
 static unsigned long slots_fetch_random(void)
 {
-	unsigned long slot;
-	int i;
+	unsigned int slot, i;
 
 	/* Handle case of no slots stored. */
 	if (slot_max == 0)
@@ -603,7 +597,7 @@ static unsigned long slots_fetch_random(void)
 			slot -= slot_areas[i].num;
 			continue;
 		}
-		return slot_areas[i].addr + slot * CONFIG_PHYSICAL_ALIGN;
+		return slot_areas[i].addr + (unsigned long)slot * CONFIG_PHYSICAL_ALIGN;
 	}
 
 	if (i == slot_area_index)
@@ -819,28 +813,24 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 		return 0;
 	}
 
-	if (process_efi_entries(minimum, image_size))
-		return slots_fetch_random();
+	if (!process_efi_entries(minimum, image_size))
+		process_e820_entries(minimum, image_size);
 
-	process_e820_entries(minimum, image_size);
 	return slots_fetch_random();
 }
 
 static unsigned long find_random_virt_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
-	unsigned long slots, random_addr;
-
-	/* Align image_size for easy slot calculations. */
-	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
+	unsigned int slots;
+	unsigned long random_addr;
 
 	/*
 	 * There are how many CONFIG_PHYSICAL_ALIGN-sized slots
 	 * that can hold image_size within the range of minimum to
 	 * KERNEL_IMAGE_SIZE?
 	 */
-	slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
-		 CONFIG_PHYSICAL_ALIGN + 1;
+	slots = 1 + (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN;
 
 	random_addr = kaslr_get_random_long("Virtual") % slots;
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
  2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                   ` (15 preceding siblings ...)
  2020-07-27 23:08 ` [PATCH v2 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
@ 2020-07-27 23:08 ` Arvind Sankar
  2020-07-28 19:37   ` Kees Cook
  16 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-27 23:08 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Commit
  f28442497b5c ("x86/boot: Fix KASLR and memmap= collision")
converted mem_vector type to use 64-bit on the 32-bit kernel as well,
based on Thomas's review [0]. However:
- the code still doesn't consistently use 64-bit types. For instance,
  mem_avoid_overlap uses 32-bit types when checking for overlaps.  This
  is actually okay, as the passed in memory range will have been clipped
  to below 4G, but it is difficult to reason about the safety of the
  code.
- KASLR on 32-bit can't use memory above 4G anyway, so it's pointless
  to keep track of ranges above 4G.

Switch the type back to unsigned long, and use a helper function to clip
regions to 4G on 32-bit, when storing mem_avoid, immovable_mem, EFI,
E820 and command-line memory regions.
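
As an illustration (made-up addresses), on 32-bit a region straddling 4G,
say start = 3G and size = 2G, comes out of the helper clipped to
start = 3G, size = 1G - 1 (ULONG_MAX is 4G - 1 there), and a region lying
entirely above 4G collapses to size = 0 and is never considered. On
64-bit the clipping is a no-op.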

[0] https://lore.kernel.org/linux-nvdimm/alpine.DEB.2.20.1701111246400.3579@nanos/

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/acpi.c  |  7 +++----
 arch/x86/boot/compressed/kaslr.c | 25 ++++++++++---------------
 arch/x86/boot/compressed/misc.h  | 19 +++++++++++++++++--
 3 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/arch/x86/boot/compressed/acpi.c b/arch/x86/boot/compressed/acpi.c
index 8bcbcee54aa1..d09e0ec5637e 100644
--- a/arch/x86/boot/compressed/acpi.c
+++ b/arch/x86/boot/compressed/acpi.c
@@ -106,7 +106,7 @@ static acpi_physical_address kexec_get_rsdp_addr(void)
 	}
 
 	/* Get systab from boot params. */
-	systab = (efi_system_table_64_t *) (ei->efi_systab | ((__u64)ei->efi_systab_hi << 32));
+	systab = (efi_system_table_64_t *) (ei->efi_systab | ((u64)ei->efi_systab_hi << 32));
 	if (!systab)
 		error("EFI system table not found in kexec boot_params.");
 
@@ -139,7 +139,7 @@ static acpi_physical_address efi_get_rsdp_addr(void)
 
 	/* Get systab from boot params. */
 #ifdef CONFIG_X86_64
-	systab = ei->efi_systab | ((__u64)ei->efi_systab_hi << 32);
+	systab = ei->efi_systab | ((u64)ei->efi_systab_hi << 32);
 #else
 	if (ei->efi_systab_hi || ei->efi_memmap_hi) {
 		debug_putstr("Error getting RSDP address: EFI system table located above 4GB.\n");
@@ -404,8 +404,7 @@ int count_immovable_mem_regions(void)
 
 			ma = (struct acpi_srat_mem_affinity *)sub_table;
 			if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) && ma->length) {
-				immovable_mem[num].start = ma->base_address;
-				immovable_mem[num].size = ma->length;
+				immovable_mem[num] = mem_vector_ctor(ma->base_address, ma->length);
 				num++;
 			}
 
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index eca2acc65e2a..8c44041ae9cb 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -98,7 +98,7 @@ static bool memmap_too_large;
  * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
  * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
  */
-static unsigned long long mem_limit;
+static unsigned long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -141,8 +141,7 @@ enum parse_mode {
 };
 
 static int
-parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
-		enum parse_mode mode)
+parse_memmap(char *p, u64 *start, u64 *size, enum parse_mode mode)
 {
 	char *oldp;
 
@@ -172,7 +171,7 @@ parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
 			 */
 			*size = 0;
 		} else {
-			unsigned long long flags;
+			u64 flags;
 
 			/*
 			 * efi_fake_mem=nn@ss:attr the attr specifies
@@ -211,7 +210,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 	while (str && (i < MAX_MEMMAP_REGIONS)) {
 		int rc;
-		unsigned long long start, size;
+		u64 start, size;
 		char *k = strchr(str, ',');
 
 		if (k)
@@ -230,8 +229,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 			continue;
 		}
 
-		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].start = start;
-		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].size = size;
+		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i] = mem_vector_ctor(start, size);
 		i++;
 	}
 
@@ -422,8 +420,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	initrd_start |= boot_params->hdr.ramdisk_image;
 	initrd_size  = (u64)boot_params->ext_ramdisk_size << 32;
 	initrd_size |= boot_params->hdr.ramdisk_size;
-	mem_avoid[MEM_AVOID_INITRD].start = initrd_start;
-	mem_avoid[MEM_AVOID_INITRD].size = initrd_size;
+	mem_avoid[MEM_AVOID_INITRD] = mem_vector_ctor(initrd_start, initrd_size);
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
@@ -673,7 +670,7 @@ static bool process_mem_region(struct mem_vector *region,
 	 * immovable memory and @region.
 	 */
 	for (i = 0; i < num_immovable_mem; i++) {
-		unsigned long long start, end, entry_end, region_end;
+		unsigned long start, end, entry_end, region_end;
 		struct mem_vector entry;
 
 		if (!mem_overlaps(region, &immovable_mem[i]))
@@ -728,7 +725,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
 	}
 	pmap =  e->efi_memmap;
 #else
-	pmap = (e->efi_memmap | ((__u64)e->efi_memmap_hi << 32));
+	pmap = (e->efi_memmap | ((u64)e->efi_memmap_hi << 32));
 #endif
 
 	nr_desc = e->efi_memmap_size / e->efi_memdesc_size;
@@ -765,8 +762,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
 		    !(md->attribute & EFI_MEMORY_MORE_RELIABLE))
 			continue;
 
-		region.start = md->phys_addr;
-		region.size = md->num_pages << EFI_PAGE_SHIFT;
+		region = mem_vector_ctor(md->phys_addr, md->num_pages << EFI_PAGE_SHIFT);
 		if (process_mem_region(&region, minimum, image_size))
 			break;
 	}
@@ -793,8 +789,7 @@ static void process_e820_entries(unsigned long minimum,
 		/* Skip non-RAM entries. */
 		if (entry->type != E820_TYPE_RAM)
 			continue;
-		region.start = entry->addr;
-		region.size = entry->size;
+		region = mem_vector_ctor(entry->addr, entry->size);
 		if (process_mem_region(&region, minimum, image_size))
 			break;
 	}
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 726e264410ff..fd7c49ab0f02 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -70,10 +70,25 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize);
 int cmdline_find_option_bool(const char *option);
 
 struct mem_vector {
-	unsigned long long start;
-	unsigned long long size;
+	unsigned long start;
+	unsigned long size;
 };
 
+static inline
+struct mem_vector mem_vector_ctor(u64 start, u64 size)
+{
+	u64 end = start + size;
+	struct mem_vector ret;
+
+	start = min_t(u64, start, ULONG_MAX);
+	end   = min_t(u64, end,   ULONG_MAX);
+
+	ret.start = (unsigned long)start;
+	ret.size  = (unsigned long)(end - start);
+
+	return ret;
+}
+
 #if CONFIG_RANDOMIZE_BASE
 /* kaslr.c */
 void choose_random_location(unsigned long input,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* Re: [PATCH 5/8] x86/kaslr: Simplify __process_mem_region
  2020-07-27 21:50 ` [PATCH 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
@ 2020-07-28 10:57   ` Ingo Molnar
  0 siblings, 0 replies; 91+ messages in thread
From: Ingo Molnar @ 2020-07-28 10:57 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: Kees Cook, x86, linux-kernel


* Arvind Sankar <nivedita@alum.mit.edu> wrote:

> Get rid of unnecessary temporary variables and redundant tests in
> __process_mem_region.
> 
> Fix one minor bug: in case of an overlap, the beginning of the region
> should be used even if it is exactly image_size, not just strictly
> larger.
> 
> Change type of minimum/image_size arguments in process_mem_region to
> unsigned long. These actually can never be above 4G (even on x86_64),
> and they're unsigned long in every other function except this one.

Please split this up into 3 separate patches. The KASLR code is 
difficult to debug, so we want to approach changes as conservatively 
as possible. This is also reinforced by the fact that this patch was 
buggy in v1 already.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH 6/8] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-27 21:50 ` [PATCH 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
@ 2020-07-28 10:58   ` Ingo Molnar
  0 siblings, 0 replies; 91+ messages in thread
From: Ingo Molnar @ 2020-07-28 10:58 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: Kees Cook, x86, linux-kernel


* Arvind Sankar <nivedita@alum.mit.edu> wrote:

> Short-circuit the whole function on 32-bit.
> 
> Replace the loop to determine the number of 1Gb pages with arithmetic.
> 
> Fix one minor bug: if the end of the region is aligned on a 1Gb
> boundary, the current code will not use the last available 1Gb page due
> to an off-by-one error.

Please split this up into two patches: first the minor bugfix, and 
then the cleanup which will then be an invariant patch, in principle.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH 7/8] x86/kaslr: Clean up slot handling
  2020-07-27 21:50 ` [PATCH 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
@ 2020-07-28 10:59   ` Ingo Molnar
  0 siblings, 0 replies; 91+ messages in thread
From: Ingo Molnar @ 2020-07-28 10:59 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: Kees Cook, x86, linux-kernel


* Arvind Sankar <nivedita@alum.mit.edu> wrote:

> The number of slots and slot areas can be unsigned int, since on 64-bit,
> the maximum amount of memory is 2^52, the minimum alignment is 2^21, so
> the slot number cannot be greater than 2^31. The slot areas are limited
> by MAX_SLOT_AREA, currently 100. Replace the type used for slot number,
> which is currently a mix of int and unsigned long, with unsigned int
> consistently.
> 
> Drop unnecessary check that number of slots is not zero in
> store_slot_info, it's guaranteed to be at least 1 by the calculation.
> 
> Drop unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in
> find_random_virt_addr, it cannot change the result: the largest valid
> slot is the largest n that satisfies
> 
>   minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE
> 
> (since minimum is already aligned) and so n is equal to
> 
>   (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN
> 
> even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN.

Please split this up into a couple of easily bisectable patches as 
well:

 - first one harmonizes the integer types used

 - patch that drops the unnecessary check

 - patch that drops the unnecessary alignment operation

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
  2020-07-27 21:50 ` [PATCH 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
@ 2020-07-28 11:05   ` Ingo Molnar
  0 siblings, 0 replies; 91+ messages in thread
From: Ingo Molnar @ 2020-07-28 11:05 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: Kees Cook, x86, linux-kernel


* Arvind Sankar <nivedita@alum.mit.edu> wrote:

> Commit
>   f28442497b5c ("x86/boot: Fix KASLR and memmap= collision")
> converted mem_vector type to use 64-bit on the 32-bit kernel as well,
> based on Thomas's review [0]. However:
> - the code still doesn't consistently use 64-bit types. For instance,
>   mem_avoid_overlap uses 32-bit types when checking for overlaps.  This
>   is actually okay, as the passed in memory range will have been clipped
>   to below 4G, but it is difficult to reason about the safety of the
>   code.
> - KASLR on 32-bit can't use memory above 4G anyway, so it's pointless
>   to keep track of ranges above 4G.
> 
> Switch the type back to unsigned long, and use a helper function to clip
> regions to 4G on 32-bit, when storing mem_avoid, immovable_mem, EFI,
> E820 and command-line memory regions.
> 
> [0] https://lore.kernel.org/linux-nvdimm/alpine.DEB.2.20.1701111246400.3579@nanos/

So why not just standardize all the calculations around u64, and for 
32-bit kernels add in a debug check that double checks that it all 
remained within the first 32 bits?

It's much easier to argue about this if we don't have to worry about 
32-bit overflows and if the types are simple and bitness invariant.

Yes, technically it's slightly bloated on 32-bit kernels, but 
maintainability of 32-bit kernels is the primary concern now ...
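
Something along these lines, just an untested sketch with a made-up name,
is what I mean:

	static void sanity_check_region(u64 start, u64 size)
	{
		/* Debug aid: 32-bit KASLR has to stay below 4G anyway. */
		if (IS_ENABLED(CONFIG_X86_32) && start + size > (1ULL << 32))
			warn("KASLR: region extends above 4G on 32-bit");
	}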

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
@ 2020-07-28 11:06   ` Ingo Molnar
  2020-07-28 14:24     ` Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 00/21] " Arvind Sankar
                     ` (21 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Ingo Molnar @ 2020-07-28 11:06 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: Kees Cook, x86, linux-kernel


* Arvind Sankar <nivedita@alum.mit.edu> wrote:

> The first 7 patches are cleanup and minor bugfixes on the x86 KASLR
> code.
> 
> The last one is a bit of an RFC. The memory regions used for KASLR are
> stored as 64-bit even on a 32-bit kernel. However there are still a few
> local variables that are 32-bit, but should be ok as far as I can see
> because they are assigned values that have been already limited to
> 32-bit. It does make it a little harder to verify that the code is
> correct. Since KASLR cannot actually use 64-bit regions for the 32-bit
> kernel, the patch reduces regions to their below-4G segment when
> creating them, making the actual variables 32-bit. Alternatively, the
> few local variables could be upgraded to 64-bit.
> 
> v1->v2:
> - Fix a bug in the bugfix 5/8: overlap.start can be smaller than
>   region.start, so shouldn't subtract before comparing.
> 
> Arvind Sankar (8):
>   x86/kaslr: Make command line handling safer
>   x86/kaslr: Remove bogus warning and unnecessary goto
>   x86/kaslr: Fix process_efi_entries comment
>   x86/kaslr: Initialize mem_limit to the real maximum address
>   x86/kaslr: Simplify __process_mem_region
>   x86/kaslr: Simplify process_gb_huge_pages
>   x86/kaslr: Clean up slot handling
>   x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
> 
>  arch/x86/boot/compressed/acpi.c  |   7 +-
>  arch/x86/boot/compressed/kaslr.c | 228 ++++++++++++-------------------
>  arch/x86/boot/compressed/misc.h  |  19 ++-
>  3 files changed, 111 insertions(+), 143 deletions(-)

I've applied patches 1-4 to x86/kaslr and will push them out if they 
pass testing - see the review feedback for the others.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Initialize mem_limit to the real maximum address
  2020-07-27 23:07 ` [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
@ 2020-07-28 12:29   ` tip-bot2 for Arvind Sankar
  2020-07-28 18:49   ` [PATCH v2 4/8] " Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-07-28 12:29 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     7f104ea54e69ec45cbc4948cdb7d7f74d46def6d
Gitweb:        https://git.kernel.org/tip/7f104ea54e69ec45cbc4948cdb7d7f74d46def6d
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:57 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 28 Jul 2020 12:54:43 +02:00

x86/kaslr: Initialize mem_limit to the real maximum address

On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level
paging or 4PiB with 5-level paging). This is currently not enforced by
KASLR, which thus implicitly relies on physical memory being limited to
less than 64TiB.

On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by
special checks in __process_mem_region().

Initialize mem_limit to the maximum (depending on architecture), instead
of ULLONG_MAX, and make sure the command-line arguments can only
decrease it. This makes the enforcement explicit on 64-bit, and
eliminates the 32-bit specific checks to keep the kernel below 512M.

Check upfront to make sure the minimum address is below the limit before
doing any work.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-5-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 41 ++++++++++++++++---------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 207fcb7..758d784 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -94,8 +94,11 @@ static unsigned long get_boot_seed(void)
 static bool memmap_too_large;
 
 
-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
-static unsigned long long mem_limit = ULLONG_MAX;
+/*
+ * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
+ * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
+ */
+static unsigned long long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -221,7 +224,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 		if (start == 0) {
 			/* Store the specified memory limit if size > 0 */
-			if (size > 0)
+			if (size > 0 && size < mem_limit)
 				mem_limit = size;
 
 			continue;
@@ -311,7 +314,8 @@ static void handle_mem_options(void)
 			if (mem_size == 0)
 				break;
 
-			mem_limit = mem_size;
+			if (mem_size < mem_limit)
+				mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
 			mem_avoid_memmap(PARSE_EFI, val);
 		}
@@ -322,7 +326,9 @@ static void handle_mem_options(void)
 }
 
 /*
- * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
+ * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM)
+ * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit.
+ *
  * The mem_avoid array is used to store the ranges that need to be avoided
  * when KASLR searches for an appropriate random address. We must avoid any
  * regions that are unsafe to overlap with during decompression, and other
@@ -619,10 +625,6 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long start_orig, end;
 	struct mem_vector cur_entry;
 
-	/* On 32-bit, ignore entries entirely above our maximum. */
-	if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE)
-		return;
-
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
 		return;
@@ -655,11 +657,6 @@ static void __process_mem_region(struct mem_vector *entry,
 		/* Reduce size by any delta from the original address. */
 		region.size -= region.start - start_orig;
 
-		/* On 32-bit, reduce region size to fit within max size. */
-		if (IS_ENABLED(CONFIG_X86_32) &&
-		    region.start + region.size > KERNEL_IMAGE_SIZE)
-			region.size = KERNEL_IMAGE_SIZE - region.start;
-
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
 			return;
@@ -844,15 +841,16 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	/* Bail out early if it's impossible to succeed. */
+	if (minimum + image_size > mem_limit)
+		return 0;
+
 	/* Check if we had too many memmaps. */
 	if (memmap_too_large) {
 		debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n");
 		return 0;
 	}
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
-
 	if (process_efi_entries(minimum, image_size))
 		return slots_fetch_random();
 
@@ -865,8 +863,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
 	/* Align image_size for easy slot calculations. */
 	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
 
@@ -913,6 +909,11 @@ void choose_random_location(unsigned long input,
 	/* Prepare to add new identity pagetables on demand. */
 	initialize_identity_maps();
 
+	if (IS_ENABLED(CONFIG_X86_32))
+		mem_limit = KERNEL_IMAGE_SIZE;
+	else
+		mem_limit = MAXMEM;
+
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init(input, input_size, *output);
 
@@ -922,6 +923,8 @@ void choose_random_location(unsigned long input,
 	 * location:
 	 */
 	min_addr = min(*output, 512UL << 20);
+	/* Make sure minimum is aligned. */
+	min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
 
 	/* Walk available memory entries to find a random address. */
 	random_addr = find_random_phys_addr(min_addr, output_size);

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Fix process_efi_entries comment
  2020-07-27 23:07 ` [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
@ 2020-07-28 12:29   ` tip-bot2 for Arvind Sankar
  2020-07-28 18:27   ` [PATCH v2 3/8] " Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-07-28 12:29 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     a68bcea591a040cce5e08615c829a316beb3e4b4
Gitweb:        https://git.kernel.org/tip/a68bcea591a040cce5e08615c829a316beb3e4b4
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:56 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 28 Jul 2020 12:54:43 +02:00

x86/kaslr: Fix process_efi_entries comment

Since commit:

  0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")

process_efi_entries() will return true if we have an EFI memmap, not just
if it contained EFI_MEMORY_MORE_RELIABLE regions.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-4-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 21cd9e0..207fcb7 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -741,8 +741,8 @@ static bool process_mem_region(struct mem_vector *region,
 
 #ifdef CONFIG_EFI
 /*
- * Returns true if mirror region found (and must have been processed
- * for slots adding)
+ * Returns true if we processed the EFI memmap, which we prefer over the E820
+ * table if it is available.
  */
 static bool
 process_efi_entries(unsigned long minimum, unsigned long image_size)

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Remove bogus warning and unnecessary goto
  2020-07-27 23:07 ` [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
@ 2020-07-28 12:29   ` tip-bot2 for Arvind Sankar
  2020-07-28 18:26   ` [PATCH v2 2/8] " Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-07-28 12:29 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     237069512d18f0f20b230680f067a2c070a7903a
Gitweb:        https://git.kernel.org/tip/237069512d18f0f20b230680f067a2c070a7903a
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:55 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 28 Jul 2020 12:54:42 +02:00

x86/kaslr: Remove bogus warning and unnecessary goto

Drop the warning on seeing "--" in handle_mem_options(). This will trigger
whenever one of the memory options is present in the command line
together with "--", but there's no problem if that is the case.

Replace goto with break.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-3-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c |  9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index a4af896..21cd9e0 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -295,10 +295,8 @@ static void handle_mem_options(void)
 	while (*args) {
 		args = next_arg(args, &param, &val);
 		/* Stop at -- */
-		if (!val && strcmp(param, "--") == 0) {
-			warn("Only '--' specified in cmdline");
-			goto out;
-		}
+		if (!val && strcmp(param, "--") == 0)
+			break;
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
@@ -311,7 +309,7 @@ static void handle_mem_options(void)
 				continue;
 			mem_size = memparse(p, &p);
 			if (mem_size == 0)
-				goto out;
+				break;
 
 			mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
@@ -319,7 +317,6 @@ static void handle_mem_options(void)
 		}
 	}
 
-out:
 	free(tmp_cmdline);
 	return;
 }

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Make command line handling safer
  2020-07-27 23:07 ` [PATCH v2 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
@ 2020-07-28 12:29   ` tip-bot2 for Arvind Sankar
  2020-07-28 18:26   ` [PATCH v2 1/8] " Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-07-28 12:29 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     c8465b03acf9d4bb43b9714b6cfb99442defe664
Gitweb:        https://git.kernel.org/tip/c8465b03acf9d4bb43b9714b6cfb99442defe664
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:54 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Tue, 28 Jul 2020 12:54:42 +02:00

x86/kaslr: Make command line handling safer

Handle the possibility that the command line is NULL.

Replace open-coded strlen with a function call.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-2-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d7408af..a4af896 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -268,15 +268,19 @@ static void parse_gb_huge_pages(char *param, char *val)
 static void handle_mem_options(void)
 {
 	char *args = (char *)get_cmd_line_ptr();
-	size_t len = strlen((char *)args);
+	size_t len;
 	char *tmp_cmdline;
 	char *param, *val;
 	u64 mem_size;
 
+	if (!args)
+		return;
+
 	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
 		!strstr(args, "hugepages"))
 		return;
 
+	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -399,8 +403,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 {
 	unsigned long init_size = boot_params->hdr.init_size;
 	u64 initrd_start, initrd_size;
-	u64 cmd_line, cmd_line_size;
-	char *ptr;
+	unsigned long cmd_line, cmd_line_size = 0;
 
 	/*
 	 * Avoid the region that is unsafe to overlap during
@@ -421,12 +424,10 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
-	cmd_line  = (u64)boot_params->ext_cmd_line_ptr << 32;
-	cmd_line |= boot_params->hdr.cmd_line_ptr;
+	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
-	ptr = (char *)(unsigned long)cmd_line;
-	for (cmd_line_size = 0; ptr[cmd_line_size++];)
-		;
+	if (cmd_line)
+		cmd_line_size = strlen((char *)cmd_line);
 	mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
 	mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
 	add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes
  2020-07-28 11:06   ` Ingo Molnar
@ 2020-07-28 14:24     ` Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 14:24 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Arvind Sankar, Kees Cook, x86, linux-kernel

On Tue, Jul 28, 2020 at 01:06:17PM +0200, Ingo Molnar wrote:
> 
> * Arvind Sankar <nivedita@alum.mit.edu> wrote:
> 
> > The first 7 patches are cleanup and minor bugfixes on the x86 KASLR
> > code.
> > 
> > The last one is a bit of an RFC. The memory regions used for KASLR are
> > stored as 64-bit even on a 32-bit kernel. However there are still a few
> > local variables that are 32-bit, but should be ok as far as I can see
> > because they are assigned values that have been already limited to
> > 32-bit. It does make it a little harder to verify that the code is
> > correct. Since KASLR cannot actually use 64-bit regions for the 32-bit
> > kernel, the patch reduces regions to their below-4G segment when
> > creating them, making the actual variables 32-bit. Alternatively, the
> > few local variables could be upgraded to 64-bit.
> > 
> > v1->v2:
> > - Fix a bug in the bugfix 5/8: overlap.start can be smaller than
> >   region.start, so shouldn't subtract before comparing.
> > 
> > Arvind Sankar (8):
> >   x86/kaslr: Make command line handling safer
> >   x86/kaslr: Remove bogus warning and unnecessary goto
> >   x86/kaslr: Fix process_efi_entries comment
> >   x86/kaslr: Initialize mem_limit to the real maximum address
> >   x86/kaslr: Simplify __process_mem_region
> >   x86/kaslr: Simplify process_gb_huge_pages
> >   x86/kaslr: Clean up slot handling
> >   x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
> > 
> >  arch/x86/boot/compressed/acpi.c  |   7 +-
> >  arch/x86/boot/compressed/kaslr.c | 228 ++++++++++++-------------------
> >  arch/x86/boot/compressed/misc.h  |  19 ++-
> >  3 files changed, 111 insertions(+), 143 deletions(-)
> 
> I've applied patches 1-4 to x86/kaslr and will push them out if they 
> pass testing - see the review feedback for the others.
> 
> Thanks,
> 
> 	Ingo

Ok, thanks, I will send them split out and replace the last patch.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 1/8] x86/kaslr: Make command line handling safer
  2020-07-27 23:07 ` [PATCH v2 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
@ 2020-07-28 18:26   ` Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: Kees Cook @ 2020-07-28 18:26 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:07:54PM -0400, Arvind Sankar wrote:
> Handle the possibility that the command line is NULL.
> 
> Replace open-coded strlen with a function call.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>

Reviewed-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto
  2020-07-27 23:07 ` [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
@ 2020-07-28 18:26   ` Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: Kees Cook @ 2020-07-28 18:26 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:07:55PM -0400, Arvind Sankar wrote:
> Drop the warning on seeing "--" in handle_mem_options. This will trigger
> whenever one of the memory options is present in the command line
> together with "--", but there's no problem if that is the case.
> 
> Replace goto with break.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>

Reviewed-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment
  2020-07-27 23:07 ` [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
@ 2020-07-28 18:27   ` Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: Kees Cook @ 2020-07-28 18:27 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:07:56PM -0400, Arvind Sankar wrote:
> Since commit
>   0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")
> process_efi_entries will return true if we have an EFI memmap, not just
> if it contained EFI_MEMORY_MORE_RELIABLE regions.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>

Reviewed-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address
  2020-07-27 23:07 ` [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
@ 2020-07-28 18:49   ` Kees Cook
  2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: Kees Cook @ 2020-07-28 18:49 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:07:57PM -0400, Arvind Sankar wrote:
> On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level
> paging or 4PiB with 5-level paging). This is currently not enforced by
> KASLR, which thus implicitly relies on physical memory being limited to
> less than 64TiB.
> 
> On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by
> special checks in __process_mem_region.
> 
> Initialize mem_limit to the maximum (depending on architecture), instead
> of ULLONG_MAX, and make sure the command-line arguments can only
> decrease it. This makes the enforcement explicit on 64-bit, and
> eliminates the 32-bit specific checks to keep the kernel below 512M.
> 
> Check upfront to make sure the minimum address is below the limit before
> doing any work.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>

Acked-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region
  2020-07-27 23:07 ` [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
@ 2020-07-28 19:25   ` Kees Cook
  2020-07-28 19:42     ` Arvind Sankar
  0 siblings, 1 reply; 91+ messages in thread
From: Kees Cook @ 2020-07-28 19:25 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:07:58PM -0400, Arvind Sankar wrote:
> Get rid of unnecessary temporary variables and redundant tests in
> __process_mem_region.
> 
> Fix one minor bug: in case of an overlap, the beginning of the region
> should be used even if it is exactly image_size, not just strictly
> larger.
> 
> Change type of minimum/image_size arguments in process_mem_region to
> unsigned long. These actually can never be above 4G (even on x86_64),
> and they're unsigned long in every other function except this one.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>

Please split this up (which I think is already planned):

- bug fix (which looks like a correct fix to me)
- arg type size changes
- refactoring

I don't currently agree that the refactoring makes things easier to
read, but let's see v3. :)

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-27 23:07 ` [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
@ 2020-07-28 19:27   ` Kees Cook
  2020-07-28 19:45     ` Arvind Sankar
  0 siblings, 1 reply; 91+ messages in thread
From: Kees Cook @ 2020-07-28 19:27 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:07:59PM -0400, Arvind Sankar wrote:
> Short-circuit the whole function on 32-bit.
> 
> Replace the loop to determine the number of 1Gb pages with arithmetic.
> 
> Fix one minor bug: if the end of the region is aligned on a 1Gb
> boundary, the current code will not use the last available 1Gb page due
> to an off-by-one error.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>

Can you add some KUnit tests to validate these refactorings? Touching
this code is so painful. :)

-Kees

> +++ b/arch/x86/boot/compressed/kaslr.c
> @@ -546,49 +546,43 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
>  static void
>  process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
>  {
> -	unsigned long addr, size = 0;
> +	unsigned long pud_start, pud_end, gb_huge_pages;
>  	struct mem_vector tmp;
> -	int i = 0;
>  
> -	if (!max_gb_huge_pages) {
> +	if (IS_ENABLED(CONFIG_X86_32) || !max_gb_huge_pages) {
>  		store_slot_info(region, image_size);
>  		return;
>  	}

Won't max_gb_huge_pages always be false for 32-bit?

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 7/8] x86/kaslr: Clean up slot handling
  2020-07-27 23:08 ` [PATCH v2 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
@ 2020-07-28 19:34   ` Kees Cook
  2020-07-28 20:04     ` Arvind Sankar
  0 siblings, 1 reply; 91+ messages in thread
From: Kees Cook @ 2020-07-28 19:34 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:08:00PM -0400, Arvind Sankar wrote:
> The number of slots and slot areas can be unsigned int, since on 64-bit,
> the maximum amount of memory is 2^52, the minimum alignment is 2^21, so
> the slot number cannot be greater than 2^31. The slot areas are limited
> by MAX_SLOT_AREA, currently 100. Replace the type used for slot number,
> which is currently a mix of int and unsigned long, with unsigned int
> consistently.

I think it's good to standardize the type, but let's go to unsigned long
so we don't have to think about this again in the future.

> Drop unnecessary check that number of slots is not zero in
> store_slot_info, it's guaranteed to be at least 1 by the calculation.
> 
> Drop unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in
> find_random_virt_addr, it cannot change the result: the largest valid
> slot is the largest n that satisfies

I view all of these things as robustness checks. It doesn't hurt to do
these checks, and they avoid running into problems if future refactoring
breaks assumptions.

But again, let's split this patch up. type changes, refactoring, etc.

Notes below...

> 
>   minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE
> 
> (since minimum is already aligned) and so n is equal to
> 
>   (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN
> 
> even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN.
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> ---
>  arch/x86/boot/compressed/kaslr.c | 36 ++++++++++++--------------------
>  1 file changed, 13 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
> index 91c5f9771f42..eca2acc65e2a 100644
> --- a/arch/x86/boot/compressed/kaslr.c
> +++ b/arch/x86/boot/compressed/kaslr.c
> @@ -511,16 +511,14 @@ static bool mem_avoid_overlap(struct mem_vector *img,
>  
>  struct slot_area {
>  	unsigned long addr;
> -	int num;
> +	unsigned int num;
>  };
>  
>  #define MAX_SLOT_AREA 100
>  
>  static struct slot_area slot_areas[MAX_SLOT_AREA];
> -
> -static unsigned long slot_max;
> -
> -static unsigned long slot_area_index;
> +static unsigned int slot_area_index;
> +static unsigned int slot_max;
>  
>  static void store_slot_info(struct mem_vector *region, unsigned long image_size)
>  {
> @@ -530,13 +528,10 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
>  		return;
>  
>  	slot_area.addr = region->start;
> -	slot_area.num = (region->size - image_size) /
> -			CONFIG_PHYSICAL_ALIGN + 1;
> +	slot_area.num = 1 + (region->size - image_size) / CONFIG_PHYSICAL_ALIGN;
>  
> -	if (slot_area.num > 0) {
> -		slot_areas[slot_area_index++] = slot_area;
> -		slot_max += slot_area.num;
> -	}
> +	slot_areas[slot_area_index++] = slot_area;
> +	slot_max += slot_area.num;
>  }
>  
>  /*
> @@ -589,8 +584,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
>  
>  static unsigned long slots_fetch_random(void)
>  {
> -	unsigned long slot;
> -	int i;
> +	unsigned int slot, i;
>  
>  	/* Handle case of no slots stored. */
>  	if (slot_max == 0)
> @@ -603,7 +597,7 @@ static unsigned long slots_fetch_random(void)
>  			slot -= slot_areas[i].num;
>  			continue;
>  		}
> -		return slot_areas[i].addr + slot * CONFIG_PHYSICAL_ALIGN;
> +		return slot_areas[i].addr + (unsigned long)slot * CONFIG_PHYSICAL_ALIGN;
>  	}
>  
>  	if (i == slot_area_index)
> @@ -819,28 +813,24 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
>  		return 0;
>  	}
>  
> -	if (process_efi_entries(minimum, image_size))
> -		return slots_fetch_random();
> +	if (!process_efi_entries(minimum, image_size))
> +		process_e820_entries(minimum, image_size);

I like this change; the double-call to slots_fetch_random() bugged me.
:)

>  
> -	process_e820_entries(minimum, image_size);
>  	return slots_fetch_random();
>  }
>  
>  static unsigned long find_random_virt_addr(unsigned long minimum,
>  					   unsigned long image_size)
>  {
> -	unsigned long slots, random_addr;
> -
> -	/* Align image_size for easy slot calculations. */
> -	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
> +	unsigned int slots;
> +	unsigned long random_addr;
>  
>  	/*
>  	 * There are how many CONFIG_PHYSICAL_ALIGN-sized slots
>  	 * that can hold image_size within the range of minimum to
>  	 * KERNEL_IMAGE_SIZE?
>  	 */
> -	slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
> -		 CONFIG_PHYSICAL_ALIGN + 1;
> +	slots = 1 + (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN;

These are the same -- why change the code?

>  
>  	random_addr = kaslr_get_random_long("Virtual") % slots;
>  
> -- 
> 2.26.2
> 

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
  2020-07-27 23:08 ` [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
@ 2020-07-28 19:37   ` Kees Cook
  2020-07-28 20:08     ` Arvind Sankar
  0 siblings, 1 reply; 91+ messages in thread
From: Kees Cook @ 2020-07-28 19:37 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Mon, Jul 27, 2020 at 07:08:01PM -0400, Arvind Sankar wrote:
> Commit
>   f28442497b5c ("x86/boot: Fix KASLR and memmap= collision")
> converted mem_vector type to use 64-bit on the 32-bit kernel as well,
> based on Thomas's review [0]. However:
> - the code still doesn't consistently use 64-bit types. For instance,
>   mem_avoid_overlap uses 32-bit types when checking for overlaps.  This
>   is actually okay, as the passed in memory range will have been clipped
>   to below 4G, but it is difficult to reason about the safety of the
>   code.
> - KASLR on 32-bit can't use memory above 4G anyway, so it's pointless
>   to keep track of ranges above 4G.
> 
> Switch the type back to unsigned long, and use a helper function to clip
> regions to 4G on 32-bit, when storing mem_avoid, immovable_mem, EFI,
> E820 and command-line memory regions.

The reason for long long is to avoid having to check for overflows in
any of the offset calculations. Why not just leave this as-is?

> 
> [0] https://lore.kernel.org/linux-nvdimm/alpine.DEB.2.20.1701111246400.3579@nanos/
> 
> Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> ---
>  arch/x86/boot/compressed/acpi.c  |  7 +++----
>  arch/x86/boot/compressed/kaslr.c | 25 ++++++++++---------------
>  arch/x86/boot/compressed/misc.h  | 19 +++++++++++++++++--
>  3 files changed, 30 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/x86/boot/compressed/acpi.c b/arch/x86/boot/compressed/acpi.c
> index 8bcbcee54aa1..d09e0ec5637e 100644
> --- a/arch/x86/boot/compressed/acpi.c
> +++ b/arch/x86/boot/compressed/acpi.c
> @@ -106,7 +106,7 @@ static acpi_physical_address kexec_get_rsdp_addr(void)
>  	}
>  
>  	/* Get systab from boot params. */
> -	systab = (efi_system_table_64_t *) (ei->efi_systab | ((__u64)ei->efi_systab_hi << 32));
> +	systab = (efi_system_table_64_t *) (ei->efi_systab | ((u64)ei->efi_systab_hi << 32));
>  	if (!systab)
>  		error("EFI system table not found in kexec boot_params.");
>  
> @@ -139,7 +139,7 @@ static acpi_physical_address efi_get_rsdp_addr(void)
>  
>  	/* Get systab from boot params. */
>  #ifdef CONFIG_X86_64
> -	systab = ei->efi_systab | ((__u64)ei->efi_systab_hi << 32);
> +	systab = ei->efi_systab | ((u64)ei->efi_systab_hi << 32);
>  #else
>  	if (ei->efi_systab_hi || ei->efi_memmap_hi) {
>  		debug_putstr("Error getting RSDP address: EFI system table located above 4GB.\n");
> @@ -404,8 +404,7 @@ int count_immovable_mem_regions(void)
>  
>  			ma = (struct acpi_srat_mem_affinity *)sub_table;
>  			if (!(ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) && ma->length) {
> -				immovable_mem[num].start = ma->base_address;
> -				immovable_mem[num].size = ma->length;
> +				immovable_mem[num] = mem_vector_ctor(ma->base_address, ma->length);
>  				num++;
>  			}
>  
> diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
> index eca2acc65e2a..8c44041ae9cb 100644
> --- a/arch/x86/boot/compressed/kaslr.c
> +++ b/arch/x86/boot/compressed/kaslr.c
> @@ -98,7 +98,7 @@ static bool memmap_too_large;
>   * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
>   * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
>   */
> -static unsigned long long mem_limit;
> +static unsigned long mem_limit;
>  
>  /* Number of immovable memory regions */
>  static int num_immovable_mem;
> @@ -141,8 +141,7 @@ enum parse_mode {
>  };
>  
>  static int
> -parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
> -		enum parse_mode mode)
> +parse_memmap(char *p, u64 *start, u64 *size, enum parse_mode mode)
>  {
>  	char *oldp;
>  
> @@ -172,7 +171,7 @@ parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
>  			 */
>  			*size = 0;
>  		} else {
> -			unsigned long long flags;
> +			u64 flags;
>  
>  			/*
>  			 * efi_fake_mem=nn@ss:attr the attr specifies
> @@ -211,7 +210,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
>  
>  	while (str && (i < MAX_MEMMAP_REGIONS)) {
>  		int rc;
> -		unsigned long long start, size;
> +		u64 start, size;
>  		char *k = strchr(str, ',');
>  
>  		if (k)
> @@ -230,8 +229,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
>  			continue;
>  		}
>  
> -		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].start = start;
> -		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i].size = size;
> +		mem_avoid[MEM_AVOID_MEMMAP_BEGIN + i] = mem_vector_ctor(start, size);
>  		i++;
>  	}
>  
> @@ -422,8 +420,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
>  	initrd_start |= boot_params->hdr.ramdisk_image;
>  	initrd_size  = (u64)boot_params->ext_ramdisk_size << 32;
>  	initrd_size |= boot_params->hdr.ramdisk_size;
> -	mem_avoid[MEM_AVOID_INITRD].start = initrd_start;
> -	mem_avoid[MEM_AVOID_INITRD].size = initrd_size;
> +	mem_avoid[MEM_AVOID_INITRD] = mem_vector_ctor(initrd_start, initrd_size);
>  	/* No need to set mapping for initrd, it will be handled in VO. */
>  
>  	/* Avoid kernel command line. */
> @@ -673,7 +670,7 @@ static bool process_mem_region(struct mem_vector *region,
>  	 * immovable memory and @region.
>  	 */
>  	for (i = 0; i < num_immovable_mem; i++) {
> -		unsigned long long start, end, entry_end, region_end;
> +		unsigned long start, end, entry_end, region_end;
>  		struct mem_vector entry;
>  
>  		if (!mem_overlaps(region, &immovable_mem[i]))
> @@ -728,7 +725,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
>  	}
>  	pmap =  e->efi_memmap;
>  #else
> -	pmap = (e->efi_memmap | ((__u64)e->efi_memmap_hi << 32));
> +	pmap = (e->efi_memmap | ((u64)e->efi_memmap_hi << 32));
>  #endif
>  
>  	nr_desc = e->efi_memmap_size / e->efi_memdesc_size;
> @@ -765,8 +762,7 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
>  		    !(md->attribute & EFI_MEMORY_MORE_RELIABLE))
>  			continue;
>  
> -		region.start = md->phys_addr;
> -		region.size = md->num_pages << EFI_PAGE_SHIFT;
> +		region = mem_vector_ctor(md->phys_addr, md->num_pages << EFI_PAGE_SHIFT);
>  		if (process_mem_region(&region, minimum, image_size))
>  			break;
>  	}
> @@ -793,8 +789,7 @@ static void process_e820_entries(unsigned long minimum,
>  		/* Skip non-RAM entries. */
>  		if (entry->type != E820_TYPE_RAM)
>  			continue;
> -		region.start = entry->addr;
> -		region.size = entry->size;
> +		region = mem_vector_ctor(entry->addr, entry->size);
>  		if (process_mem_region(&region, minimum, image_size))
>  			break;
>  	}
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 726e264410ff..fd7c49ab0f02 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -70,10 +70,25 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize);
>  int cmdline_find_option_bool(const char *option);
>  
>  struct mem_vector {
> -	unsigned long long start;
> -	unsigned long long size;
> +	unsigned long start;
> +	unsigned long size;
>  };
>  
> +static inline
> +struct mem_vector mem_vector_ctor(u64 start, u64 size)
> +{
> +	u64 end = start + size;
> +	struct mem_vector ret;
> +
> +	start = min_t(u64, start, ULONG_MAX);
> +	end   = min_t(u64, end,   ULONG_MAX);
> +
> +	ret.start = (unsigned long)start;
> +	ret.size  = (unsigned long)(end - start);
> +
> +	return ret;
> +}
> +
>  #if CONFIG_RANDOMIZE_BASE
>  /* kaslr.c */
>  void choose_random_location(unsigned long input,
> -- 
> 2.26.2
> 

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region
  2020-07-28 19:25   ` Kees Cook
@ 2020-07-28 19:42     ` Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 19:42 UTC (permalink / raw)
  To: Kees Cook; +Cc: Arvind Sankar, x86, linux-kernel

On Tue, Jul 28, 2020 at 12:25:16PM -0700, Kees Cook wrote:
> On Mon, Jul 27, 2020 at 07:07:58PM -0400, Arvind Sankar wrote:
> > Get rid of unnecessary temporary variables and redundant tests in
> > __process_mem_region.
> > 
> > Fix one minor bug: in case of an overlap, the beginning of the region
> > should be used even if it is exactly image_size, not just strictly
> > larger.
> > 
> > Change type of minimum/image_size arguments in process_mem_region to
> > unsigned long. These actually can never be above 4G (even on x86_64),
> > and they're unsigned long in every other function except this one.
> > 
> > Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> 
> Please split this up (which I think is already planned):
> 
> - bug fix (which looks like a correct fix to me)
> - arg type size changes
> - refactoring
> 
> I don't currently agree that the refactoring makes things easier to
> read, but let's see v3. :)
> 
> -- 
> Kees Cook

Yep

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-28 19:27   ` Kees Cook
@ 2020-07-28 19:45     ` Arvind Sankar
  2020-07-28 20:13       ` Kees Cook
  0 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 19:45 UTC (permalink / raw)
  To: Kees Cook; +Cc: Arvind Sankar, x86, linux-kernel

On Tue, Jul 28, 2020 at 12:27:17PM -0700, Kees Cook wrote:
> On Mon, Jul 27, 2020 at 07:07:59PM -0400, Arvind Sankar wrote:
> > Short-circuit the whole function on 32-bit.
> > 
> > Replace the loop to determine the number of 1Gb pages with arithmetic.
> > 
> > Fix one minor bug: if the end of the region is aligned on a 1Gb
> > boundary, the current code will not use the last available 1Gb page due
> > to an off-by-one error.
> > 
> > Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
> 
> Can you add some KUnit tests could be written to do validation of the
> refactorings? Touching this code is so painful. :)
> 
> -Kees

Can I try to do that later? I've never written a KUnit test, though
it's probably a good opportunity to learn how to write one.
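
Something like this, maybe (a rough, untested skeleton -- the suite and
case names are made up, and kaslr.c would have to be refactored so the
helpers can be built outside the decompressor before KUnit could drive
them for real):

#include <kunit/test.h>

/* Stand-in constant; plays the role of CONFIG_PHYSICAL_ALIGN. */
#define ALIGN_EXAMPLE	(2UL << 20)

static void check_slot_count_math(struct kunit *test)
{
	/* The kind of arithmetic the refactorings touch. */
	unsigned long region_size = 4UL << 20;
	unsigned long image_size  = 2UL << 20;
	unsigned long slots = 1 + (region_size - image_size) / ALIGN_EXAMPLE;

	KUNIT_EXPECT_EQ(test, slots, 2UL);
}

static struct kunit_case kaslr_example_cases[] = {
	KUNIT_CASE(check_slot_count_math),
	{}
};

static struct kunit_suite kaslr_example_suite = {
	.name = "kaslr-example",
	.test_cases = kaslr_example_cases,
};
kunit_test_suite(kaslr_example_suite);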

> 
> > +++ b/arch/x86/boot/compressed/kaslr.c
> > @@ -546,49 +546,43 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
> >  static void
> >  process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
> >  {
> > -	unsigned long addr, size = 0;
> > +	unsigned long pud_start, pud_end, gb_huge_pages;
> >  	struct mem_vector tmp;
> > -	int i = 0;
> >  
> > -	if (!max_gb_huge_pages) {
> > +	if (IS_ENABLED(CONFIG_X86_32) || !max_gb_huge_pages) {
> >  		store_slot_info(region, image_size);
> >  		return;
> >  	}
> 
> Won't max_gb_huge_pages always be false for 32-bit?
> 
> -- 
> Kees Cook

It will, assuming someone doesn't pass bogus command-line arguments to
reserve Gb pages on 32-bit.

But the IS_ENABLED check allows the compiler to eliminate the entire
function at compile time.
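
The effect is the same as in this toy example (standalone sketch, not
kernel code; ENABLED_64BIT is just a stand-in for
IS_ENABLED(CONFIG_X86_64)):

#include <stdio.h>

#define ENABLED_64BIT	0		/* compile-time constant, like IS_ENABLED() */

static unsigned long max_pages = 4;	/* runtime value, like max_gb_huge_pages */

static void process(unsigned long size)
{
	if (!ENABLED_64BIT || !max_pages) {
		printf("fallback path, size=%lu\n", size);
		return;
	}
	/*
	 * Statically unreachable when ENABLED_64BIT is 0, so the compiler
	 * drops it even though max_pages is only known at run time.
	 */
	printf("huge-page path\n");
}

int main(void)
{
	process(42);
	return 0;
}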

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 7/8] x86/kaslr: Clean up slot handling
  2020-07-28 19:34   ` Kees Cook
@ 2020-07-28 20:04     ` Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 20:04 UTC (permalink / raw)
  To: Kees Cook; +Cc: Arvind Sankar, x86, linux-kernel

On Tue, Jul 28, 2020 at 12:34:45PM -0700, Kees Cook wrote:
> On Mon, Jul 27, 2020 at 07:08:00PM -0400, Arvind Sankar wrote:
> > The number of slots and slot areas can be unsigned int, since on 64-bit,
> > the maximum amount of memory is 2^52, the minimum alignment is 2^21, so
> > the slot number cannot be greater than 2^31. The slot areas are limited
> > by MAX_SLOT_AREA, currently 100. Replace the type used for slot number,
> > which is currently a mix of int and unsigned long, with unsigned int
> > consistently.
> 
> I think it's good to standardize the type, but let's go to unsigned long
> then we don't have to think about this again in the future.

Ok, I can do that instead.

> 
> > Drop unnecessary check that number of slots is not zero in
> > store_slot_info, it's guaranteed to be at least 1 by the calculation.
> > 
> > Drop unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in
> > find_random_virt_addr, it cannot change the result: the largest valid
> > slot is the largest n that satisfies
> 
> I view all of these things as robustness checks. It doesn't hurt to do
> these checks and it'll avoid crashing into problems if future
> refactoring breaks assumptions.

Well, at least the first one should really be unnecessary: the previous
line sets it as 1 + x. Seeing the check there actually confuses me: it
makes me think I must be missing some edge case where it could be zero.
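
To convince myself it can't be, this is the shape of the code in
standalone form (ALIGN_EX is a made-up stand-in for
CONFIG_PHYSICAL_ALIGN, not the real symbol):

#include <stdio.h>

#define ALIGN_EX	(2UL << 20)

/* Paraphrase of store_slot_info(): the early return guarantees num >= 1. */
static unsigned long slot_count(unsigned long region_size, unsigned long image_size)
{
	if (region_size < image_size)
		return 0;	/* the early return in store_slot_info() */

	return (region_size - image_size) / ALIGN_EX + 1;
}

int main(void)
{
	/* Smallest region that survives the early return: exactly one slot. */
	printf("%lu\n", slot_count(16UL << 20, 16UL << 20));
	return 0;
}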

The second one is also unnecessary, but I agree it might require a bit
of analysis to see that it is.

> > -	slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
> > -		 CONFIG_PHYSICAL_ALIGN + 1;
> > +	slots = 1 + (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN;
> 
> These are the same -- why change the code?
> 

It's just reformatting now that we can have more than 80-column lines. I
think it's clearer this way: it's more obvious that you're dividing by
CONFIG_PHYSICAL_ALIGN and adding one, rather than leaving the reader
wondering "did he forget the parentheses here?".

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel
  2020-07-28 19:37   ` Kees Cook
@ 2020-07-28 20:08     ` Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 20:08 UTC (permalink / raw)
  To: Kees Cook; +Cc: Arvind Sankar, x86, linux-kernel

On Tue, Jul 28, 2020 at 12:37:12PM -0700, Kees Cook wrote:
> On Mon, Jul 27, 2020 at 07:08:01PM -0400, Arvind Sankar wrote:
> > Commit
> >   f28442497b5c ("x86/boot: Fix KASLR and memmap= collision")
> > converted mem_vector type to use 64-bit on the 32-bit kernel as well,
> > based on Thomas's review [0]. However:
> > - the code still doesn't consistently use 64-bit types. For instance,
> >   mem_avoid_overlap uses 32-bit types when checking for overlaps.  This
> >   is actually okay, as the passed in memory range will have been clipped
> >   to below 4G, but it is difficult to reason about the safety of the
> >   code.
> > - KASLR on 32-bit can't use memory above 4G anyway, so it's pointless
> >   to keep track of ranges above 4G.
> > 
> > Switch the type back to unsigned long, and use a helper function to clip
> > regions to 4G on 32-bit, when storing mem_avoid, immovable_mem, EFI,
> > E820 and command-line memory regions.
> 
> The reason for long long is to avoid having to check for overflows in
> any of the offset calculations. Why not just leave this as-is?
> 

The first bullet: there are still unsigned longs in the code that get
assigned mem_vector.start etc. Taking Ingo's review into account as
well, I'm planning to just make all of those u64 in v3, instead of
this patch.
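
For reference, this is the kind of silent truncation the u64 change is
meant to rule out (contrived standalone example, not code from the
series):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t start = 1ULL << 32;	/* an e820 region starting at 4G */
	uint64_t size  = 1ULL << 30;	/* 1G */

	/* On a 32-bit build, unsigned long is 32 bits and this becomes 0. */
	unsigned long truncated = (unsigned long)start;

	printf("64-bit end: %llu\n", (unsigned long long)(start + size));
	printf("truncated start: %lu\n", truncated);
	return 0;
}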

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-28 19:45     ` Arvind Sankar
@ 2020-07-28 20:13       ` Kees Cook
  0 siblings, 0 replies; 91+ messages in thread
From: Kees Cook @ 2020-07-28 20:13 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: x86, linux-kernel

On Tue, Jul 28, 2020 at 03:45:11PM -0400, Arvind Sankar wrote:
> But the IS_ENABLED check allows the compiler to eliminate the entire
> function at compile time.

Ah, I thought it'd be a const false, which would do the same...

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
  2020-07-28 11:06   ` Ingo Molnar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-07-30 18:02     ` Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 01/21] x86/kaslr: Make command line handling safer Arvind Sankar
                     ` (20 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

v2->v3:
- Fix the first patch: command line size should be strlen + 1 to account
  for terminating NUL. Avoid calling add_identity_map if cmdline was
  NULL, though it should do nothing in that case anyway.
- Add a patch to drop the check to see if there are arguments we care
  about before parsing the command line.
- Split up patches (Ingo/Kees).
- The gb_huge_pages change now also gets rid of parsing the argument
  altogether for 32-bit.
- Slot number is changed to unsigned long instead of unsigned int
  (Kees).
- Make everything u64 instead of trying to use 32-bit on x86-32, and add
  a check in find_random_phys_addr to make sure the address is within the
  required range (Ingo/Kees).

v1->v2:
- Fix a bug in the bugfix 5/8: overlap.start can be smaller than
  region.start, so shouldn't subtract before comparing.


Arvind Sankar (21):
  x86/kaslr: Make command line handling safer
  x86/kaslr: Remove bogus warning and unnecessary goto
  x86/kaslr: Fix process_efi_entries comment
  x86/kaslr: Initialize mem_limit to the real maximum address
  x86/kaslr: Fix off-by-one error in __process_mem_region
  x86/kaslr: Drop redundant cur_entry from __process_mem_region
  x86/kaslr: Eliminate start_orig from __process_mem_region
  x86/kaslr: Drop redundant variable in __process_mem_region
  x86/kaslr: Drop some redundant checks from __process_mem_region
  x86/kaslr: Fix off-by-one error in process_gb_huge_pages
  x86/kaslr: Short-circuit gb_huge_pages on x86-32
  x86/kaslr: Simplify process_gb_huge_pages
  x86/kaslr: Drop test for command-line parameters before parsing
  x86/kaslr: Make the type of number of slots/slot areas consistent
  x86/kaslr: Drop redundant check in store_slot_info
  x86/kaslr: Drop unnecessary alignment in find_random_virt_addr
  x86/kaslr: Small cleanup of find_random_phys_addr
  x86/kaslr: Make minimum/image_size unsigned long
  x86/kaslr: Replace unsigned long long with u64
  x86/kaslr: Make local variables 64-bit
  x86/kaslr: Add a check that the random address is in range

 arch/x86/boot/compressed/kaslr.c | 233 +++++++++++++------------------
 arch/x86/boot/compressed/misc.h  |   4 +-
 2 files changed, 102 insertions(+), 135 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [PATCH v3 01/21] x86/kaslr: Make command line handling safer
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
  2020-07-28 11:06   ` Ingo Molnar
  2020-07-28 22:57   ` [PATCH v3 00/21] " Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 02/21] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
                     ` (19 subsequent siblings)
  22 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Handle the possibility that the command line is NULL.

Replace open-coded strlen with a function call.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d7408af55738..e0f69f3625ae 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -268,15 +268,19 @@ static void parse_gb_huge_pages(char *param, char *val)
 static void handle_mem_options(void)
 {
 	char *args = (char *)get_cmd_line_ptr();
-	size_t len = strlen((char *)args);
+	size_t len;
 	char *tmp_cmdline;
 	char *param, *val;
 	u64 mem_size;
 
+	if (!args)
+		return;
+
 	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
 		!strstr(args, "hugepages"))
 		return;
 
+	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -399,8 +403,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 {
 	unsigned long init_size = boot_params->hdr.init_size;
 	u64 initrd_start, initrd_size;
-	u64 cmd_line, cmd_line_size;
-	char *ptr;
+	unsigned long cmd_line, cmd_line_size;
 
 	/*
 	 * Avoid the region that is unsafe to overlap during
@@ -421,16 +424,15 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
-	cmd_line  = (u64)boot_params->ext_cmd_line_ptr << 32;
-	cmd_line |= boot_params->hdr.cmd_line_ptr;
+	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
-	ptr = (char *)(unsigned long)cmd_line;
-	for (cmd_line_size = 0; ptr[cmd_line_size++];)
-		;
-	mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
-	mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
-	add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
-			 mem_avoid[MEM_AVOID_CMDLINE].size);
+	if (cmd_line) {
+		cmd_line_size = strlen((char *)cmd_line) + 1;
+		mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
+		mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
+		add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
+				 mem_avoid[MEM_AVOID_CMDLINE].size);
+	}
 
 	/* Avoid boot parameters. */
 	mem_avoid[MEM_AVOID_BOOTPARAMS].start = (unsigned long)boot_params;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 02/21] x86/kaslr: Remove bogus warning and unnecessary goto
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (2 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 01/21] x86/kaslr: Make command line handling safer Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 03/21] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
                     ` (18 subsequent siblings)
  22 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Drop the warning on seeing "--" in handle_mem_options. This will trigger
whenever one of the memory options is present in the command line
together with "--", but there's no problem if that is the case.

Replace goto with break.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index e0f69f3625ae..c31f3a5ab9e4 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -295,10 +295,8 @@ static void handle_mem_options(void)
 	while (*args) {
 		args = next_arg(args, &param, &val);
 		/* Stop at -- */
-		if (!val && strcmp(param, "--") == 0) {
-			warn("Only '--' specified in cmdline");
-			goto out;
-		}
+		if (!val && strcmp(param, "--") == 0)
+			break;
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
@@ -311,7 +309,7 @@ static void handle_mem_options(void)
 				continue;
 			mem_size = memparse(p, &p);
 			if (mem_size == 0)
-				goto out;
+				break;
 
 			mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
@@ -319,7 +317,6 @@ static void handle_mem_options(void)
 		}
 	}
 
-out:
 	free(tmp_cmdline);
 	return;
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 03/21] x86/kaslr: Fix process_efi_entries comment
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (3 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 02/21] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 04/21] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
                     ` (17 subsequent siblings)
  22 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Since commit
  0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")
process_efi_entries will return true if we have an EFI memmap, not just
if it contains EFI_MEMORY_MORE_RELIABLE regions.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index c31f3a5ab9e4..1ab67a84a781 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -742,8 +742,8 @@ static bool process_mem_region(struct mem_vector *region,
 
 #ifdef CONFIG_EFI
 /*
- * Returns true if mirror region found (and must have been processed
- * for slots adding)
+ * Returns true if we processed the EFI memmap, which we prefer over the E820
+ * table if it is available.
  */
 static bool
 process_efi_entries(unsigned long minimum, unsigned long image_size)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 04/21] x86/kaslr: Initialize mem_limit to the real maximum address
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (4 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 03/21] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 05/21] x86/kaslr: Fix off-by-one error in __process_mem_region Arvind Sankar
                     ` (16 subsequent siblings)
  22 siblings, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level
paging or 4PiB with 5-level paging). This is currently not enforced by
KASLR, which thus implicitly relies on physical memory being limited to
less than 64TiB.

On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by
special checks in __process_mem_region.

Initialize mem_limit to the maximum (depending on architecture), instead
of ULLONG_MAX, and make sure the command-line arguments can only
decrease it. This makes the enforcement explicit on 64-bit, and
eliminates the 32-bit specific checks to keep the kernel below 512M.

Check upfront to make sure the minimum address is below the limit before
doing any work.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 41 +++++++++++++++++---------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 1ab67a84a781..da45e66cb6e4 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -94,8 +94,11 @@ static unsigned long get_boot_seed(void)
 static bool memmap_too_large;
 
 
-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
-static unsigned long long mem_limit = ULLONG_MAX;
+/*
+ * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
+ * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
+ */
+static unsigned long long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -221,7 +224,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 		if (start == 0) {
 			/* Store the specified memory limit if size > 0 */
-			if (size > 0)
+			if (size > 0 && size < mem_limit)
 				mem_limit = size;
 
 			continue;
@@ -311,7 +314,8 @@ static void handle_mem_options(void)
 			if (mem_size == 0)
 				break;
 
-			mem_limit = mem_size;
+			if (mem_size < mem_limit)
+				mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
 			mem_avoid_memmap(PARSE_EFI, val);
 		}
@@ -322,7 +326,9 @@ static void handle_mem_options(void)
 }
 
 /*
- * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
+ * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM)
+ * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit.
+ *
  * The mem_avoid array is used to store the ranges that need to be avoided
  * when KASLR searches for an appropriate random address. We must avoid any
  * regions that are unsafe to overlap with during decompression, and other
@@ -620,10 +626,6 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long start_orig, end;
 	struct mem_vector cur_entry;
 
-	/* On 32-bit, ignore entries entirely above our maximum. */
-	if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE)
-		return;
-
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
 		return;
@@ -656,11 +658,6 @@ static void __process_mem_region(struct mem_vector *entry,
 		/* Reduce size by any delta from the original address. */
 		region.size -= region.start - start_orig;
 
-		/* On 32-bit, reduce region size to fit within max size. */
-		if (IS_ENABLED(CONFIG_X86_32) &&
-		    region.start + region.size > KERNEL_IMAGE_SIZE)
-			region.size = KERNEL_IMAGE_SIZE - region.start;
-
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
 			return;
@@ -845,15 +842,16 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	/* Bail out early if it's impossible to succeed. */
+	if (minimum + image_size > mem_limit)
+		return 0;
+
 	/* Check if we had too many memmaps. */
 	if (memmap_too_large) {
 		debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n");
 		return 0;
 	}
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
-
 	if (process_efi_entries(minimum, image_size))
 		return slots_fetch_random();
 
@@ -866,8 +864,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
 	/* Align image_size for easy slot calculations. */
 	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
 
@@ -914,6 +910,11 @@ void choose_random_location(unsigned long input,
 	/* Prepare to add new identity pagetables on demand. */
 	initialize_identity_maps();
 
+	if (IS_ENABLED(CONFIG_X86_32))
+		mem_limit = KERNEL_IMAGE_SIZE;
+	else
+		mem_limit = MAXMEM;
+
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init(input, input_size, *output);
 
@@ -923,6 +924,8 @@ void choose_random_location(unsigned long input,
 	 * location:
 	 */
 	min_addr = min(*output, 512UL << 20);
+	/* Make sure minimum is aligned. */
+	min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
 
 	/* Walk available memory entries to find a random address. */
 	random_addr = find_random_phys_addr(min_addr, output_size);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 05/21] x86/kaslr: Fix off-by-one error in __process_mem_region
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (5 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 04/21] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:39     ` [tip: x86/kaslr] x86/kaslr: Fix off-by-one error in __process_mem_region() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 06/21] x86/kaslr: Drop redundant cur_entry from __process_mem_region Arvind Sankar
                     ` (15 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

In case of an overlap, the beginning of the region should be used even
if it is exactly image_size, not just strictly larger.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index da45e66cb6e4..848346fc0dbb 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -669,7 +669,7 @@ static void __process_mem_region(struct mem_vector *entry,
 		}
 
 		/* Store beginning of region if holds at least image_size. */
-		if (overlap.start > region.start + image_size) {
+		if (overlap.start >= region.start + image_size) {
 			struct mem_vector beginning;
 
 			beginning.start = region.start;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 06/21] x86/kaslr: Drop redundant cur_entry from __process_mem_region
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (6 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 05/21] x86/kaslr: Fix off-by-one error in __process_mem_region Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:39     ` [tip: x86/kaslr] x86/kaslr: Drop redundant cur_entry from __process_mem_region() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 07/21] x86/kaslr: Eliminate start_orig from __process_mem_region Arvind Sankar
                     ` (14 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

cur_entry is only used as cur_entry.start + cur_entry.size, which is
always equal to end.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 848346fc0dbb..f2454eef5790 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -624,7 +624,6 @@ static void __process_mem_region(struct mem_vector *entry,
 {
 	struct mem_vector region, overlap;
 	unsigned long start_orig, end;
-	struct mem_vector cur_entry;
 
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
@@ -634,11 +633,9 @@ static void __process_mem_region(struct mem_vector *entry,
 	end = min(entry->size + entry->start, mem_limit);
 	if (entry->start >= end)
 		return;
-	cur_entry.start = entry->start;
-	cur_entry.size = end - entry->start;
 
-	region.start = cur_entry.start;
-	region.size = cur_entry.size;
+	region.start = entry->start;
+	region.size = end - entry->start;
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
@@ -652,7 +649,7 @@ static void __process_mem_region(struct mem_vector *entry,
 		region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
 
 		/* Did we raise the address above the passed in memory entry? */
-		if (region.start > cur_entry.start + cur_entry.size)
+		if (region.start > end)
 			return;
 
 		/* Reduce size by any delta from the original address. */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 07/21] x86/kaslr: Eliminate start_orig from __process_mem_region
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (7 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 06/21] x86/kaslr: Drop redundant cur_entry from __process_mem_region Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Eliminate 'start_orig' local variable from __process_mem_region() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 08/21] x86/kaslr: Drop redundant variable in __process_mem_region Arvind Sankar
                     ` (13 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Set the region.size within the loop, which removes the need for
start_orig.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index f2454eef5790..e978c3508814 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -623,7 +623,7 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long start_orig, end;
+	unsigned long end;
 
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
@@ -635,12 +635,9 @@ static void __process_mem_region(struct mem_vector *entry,
 		return;
 
 	region.start = entry->start;
-	region.size = end - entry->start;
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
-		start_orig = region.start;
-
 		/* Potentially raise address to minimum location. */
 		if (region.start < minimum)
 			region.start = minimum;
@@ -653,7 +650,7 @@ static void __process_mem_region(struct mem_vector *entry,
 			return;
 
 		/* Reduce size by any delta from the original address. */
-		region.size -= region.start - start_orig;
+		region.size = end - region.start;
 
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
@@ -679,7 +676,6 @@ static void __process_mem_region(struct mem_vector *entry,
 			return;
 
 		/* Clip off the overlapping region and start over. */
-		region.size -= overlap.start - region.start + overlap.size;
 		region.start = overlap.start + overlap.size;
 	}
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 08/21] x86/kaslr: Drop redundant variable in __process_mem_region
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (8 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 07/21] x86/kaslr: Eliminate start_orig from __process_mem_region Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop redundant variable in __process_mem_region() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 09/21] x86/kaslr: Drop some redundant checks from __process_mem_region Arvind Sankar
                     ` (12 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

region.size can be trimmed to store the portion of the region before the
overlap, instead of a separate mem_vector variable.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index e978c3508814..8cc47faea56d 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -664,11 +664,8 @@ static void __process_mem_region(struct mem_vector *entry,
 
 		/* Store beginning of region if holds at least image_size. */
 		if (overlap.start >= region.start + image_size) {
-			struct mem_vector beginning;
-
-			beginning.start = region.start;
-			beginning.size = overlap.start - region.start;
-			process_gb_huge_pages(&beginning, image_size);
+			region.size = overlap.start - region.start;
+			process_gb_huge_pages(&region, image_size);
 		}
 
 		/* Return if overlap extends to or past end of region. */
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 09/21] x86/kaslr: Drop some redundant checks from __process_mem_region
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (9 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 08/21] x86/kaslr: Drop redundant variable in __process_mem_region Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop some redundant checks from __process_mem_region() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 10/21] x86/kaslr: Fix off-by-one error in process_gb_huge_pages Arvind Sankar
                     ` (11 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Clip the start and end of the region to minimum and mem_limit prior to
the loop. region.start can only increase during the loop, so raising it
to minimum before the loop is enough.

A region that becomes empty due to this will get checked in
the first iteration of the loop.

Drop the check for overlap extending beyond the end of the region. This
will get checked in the next loop iteration anyway.

Rename end to region_end for symmetry with region.start.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 8cc47faea56d..d074986e8061 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -623,34 +623,23 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long end;
+	unsigned long region_end;
 
-	/* Ignore entries entirely below our minimum. */
-	if (entry->start + entry->size < minimum)
-		return;
-
-	/* Ignore entries above memory limit */
-	end = min(entry->size + entry->start, mem_limit);
-	if (entry->start >= end)
-		return;
-
-	region.start = entry->start;
+	/* Enforce minimum and memory limit. */
+	region.start = max_t(unsigned long long, entry->start, minimum);
+	region_end = min(entry->start + entry->size, mem_limit);
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
-		/* Potentially raise address to minimum location. */
-		if (region.start < minimum)
-			region.start = minimum;
-
 		/* Potentially raise address to meet alignment needs. */
 		region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
 
 		/* Did we raise the address above the passed in memory entry? */
-		if (region.start > end)
+		if (region.start > region_end)
 			return;
 
 		/* Reduce size by any delta from the original address. */
-		region.size = end - region.start;
+		region.size = region_end - region.start;
 
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
@@ -668,10 +657,6 @@ static void __process_mem_region(struct mem_vector *entry,
 			process_gb_huge_pages(&region, image_size);
 		}
 
-		/* Return if overlap extends to or past end of region. */
-		if (overlap.start + overlap.size >= region.start + region.size)
-			return;
-
 		/* Clip off the overlapping region and start over. */
 		region.start = overlap.start + overlap.size;
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 10/21] x86/kaslr: Fix off-by-one error in process_gb_huge_pages
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (10 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 09/21] x86/kaslr: Drop some redundant checks from __process_mem_region Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Fix off-by-one error in process_gb_huge_pages() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 11/21] x86/kaslr: Short-circuit gb_huge_pages on x86-32 Arvind Sankar
                     ` (10 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

If the remaining size of the region is exactly 1Gb, there is still one
hugepage that can be reserved.
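
For illustration (a standalone sketch, not part of the patch; PUD_SIZE_EX
stands in for PUD_SIZE), with exactly 1GB remaining the old '>' test
counts zero pages while '>=' counts one:

#include <stdio.h>

#define PUD_SIZE_EX	(1UL << 30)

static unsigned long count(unsigned long size, int inclusive)
{
	unsigned long n = 0;

	while (inclusive ? size >= PUD_SIZE_EX : size > PUD_SIZE_EX) {
		size -= PUD_SIZE_EX;
		n++;
	}
	return n;
}

int main(void)
{
	printf("> : %lu pages\n", count(PUD_SIZE_EX, 0));	/* 0 */
	printf(">=: %lu pages\n", count(PUD_SIZE_EX, 1));	/* 1 */
	return 0;
}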

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d074986e8061..0df513e3e2ce 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -562,7 +562,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 		size = region->size - (addr - region->start);
 
 	/* Check how many 1GB huge pages can be filtered out: */
-	while (size > PUD_SIZE && max_gb_huge_pages) {
+	while (size >= PUD_SIZE && max_gb_huge_pages) {
 		size -= PUD_SIZE;
 		max_gb_huge_pages--;
 		i++;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 11/21] x86/kaslr: Short-circuit gb_huge_pages on x86-32
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (11 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 10/21] x86/kaslr: Fix off-by-one error in process_gb_huge_pages Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 12/21] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
                     ` (9 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

32-bit does not have GB pages, so don't bother checking for them. Using
IS_ENABLED() allows the compiler to completely remove the gb_huge_pages
code.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 0df513e3e2ce..3727e9708690 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -303,7 +303,7 @@ static void handle_mem_options(void)
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
-		} else if (strstr(param, "hugepages")) {
+		} else if (IS_ENABLED(CONFIG_X86_64) && strstr(param, "hugepages")) {
 			parse_gb_huge_pages(param, val);
 		} else if (!strcmp(param, "mem")) {
 			char *p = val;
@@ -551,7 +551,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 	struct mem_vector tmp;
 	int i = 0;
 
-	if (!max_gb_huge_pages) {
+	if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
 		store_slot_info(region, image_size);
 		return;
 	}
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 12/21] x86/kaslr: Simplify process_gb_huge_pages
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (12 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 11/21] x86/kaslr: Short-circuit gb_huge_pages on x86-32 Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Simplify process_gb_huge_pages() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 13/21] x86/kaslr: Drop test for command-line parameters before parsing Arvind Sankar
                     ` (8 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Replace the loop to determine the number of 1Gb pages with arithmetic.
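
For illustration (a standalone sketch, not part of the patch; PUD_SIZE_EX
stands in for PUD_SIZE and the max_gb_huge_pages cap is ignored), the
loop and the ALIGN/ALIGN_DOWN arithmetic count the same number of 1GB
pages:

#include <stdio.h>

#define PUD_SIZE_EX		(1ULL << 30)	/* 1GB */
#define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN_EX(x, a)	((x) & ~((a) - 1))

/* Old approach: walk the region in 1GB steps (off-by-one already fixed). */
static unsigned long long count_loop(unsigned long long start, unsigned long long size)
{
	unsigned long long addr = ALIGN_UP(start, PUD_SIZE_EX);
	unsigned long long n = 0;

	if (addr >= start + size)
		return 0;
	size -= addr - start;
	while (size >= PUD_SIZE_EX) {
		size -= PUD_SIZE_EX;
		n++;
	}
	return n;
}

/* New approach: pure arithmetic on the aligned boundaries. */
static unsigned long long count_arith(unsigned long long start, unsigned long long size)
{
	unsigned long long pud_start = ALIGN_UP(start, PUD_SIZE_EX);
	unsigned long long pud_end   = ALIGN_DOWN_EX(start + size, PUD_SIZE_EX);

	return pud_start < pud_end ? (pud_end - pud_start) / PUD_SIZE_EX : 0;
}

int main(void)
{
	unsigned long long start = 0x12345000ULL;
	unsigned long long size  = (5ULL << 30) + 0x1000;

	printf("loop=%llu arithmetic=%llu\n",
	       count_loop(start, size), count_arith(start, size));
	return 0;
}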

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 46 ++++++++++++++------------------
 1 file changed, 20 insertions(+), 26 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 3727e9708690..7fb699aae74e 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -547,49 +547,43 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 static void
 process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 {
-	unsigned long addr, size = 0;
+	unsigned long pud_start, pud_end, gb_huge_pages;
 	struct mem_vector tmp;
-	int i = 0;
 
 	if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	addr = ALIGN(region->start, PUD_SIZE);
-	/* Did we raise the address above the passed in memory entry? */
-	if (addr < region->start + region->size)
-		size = region->size - (addr - region->start);
-
-	/* Check how many 1GB huge pages can be filtered out: */
-	while (size >= PUD_SIZE && max_gb_huge_pages) {
-		size -= PUD_SIZE;
-		max_gb_huge_pages--;
-		i++;
-	}
+	/* Are there any 1GB pages in the region? */
+	pud_start = ALIGN(region->start, PUD_SIZE);
+	pud_end = ALIGN_DOWN(region->start + region->size, PUD_SIZE);
 
 	/* No good 1GB huge pages found: */
-	if (!i) {
+	if (pud_start >= pud_end) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	/*
-	 * Skip those 'i'*1GB good huge pages, and continue checking and
-	 * processing the remaining head or tail part of the passed region
-	 * if available.
-	 */
-
-	if (addr >= region->start + image_size) {
+	/* Check if the head part of the region is usable. */
+	if (pud_start >= region->start + image_size) {
 		tmp.start = region->start;
-		tmp.size = addr - region->start;
+		tmp.size = pud_start - region->start;
 		store_slot_info(&tmp, image_size);
 	}
 
-	size  = region->size - (addr - region->start) - i * PUD_SIZE;
-	if (size >= image_size) {
-		tmp.start = addr + i * PUD_SIZE;
-		tmp.size = size;
+	/* Skip the good 1GB pages. */
+	gb_huge_pages = (pud_end - pud_start) >> PUD_SHIFT;
+	if (gb_huge_pages > max_gb_huge_pages) {
+		pud_end = pud_start + (max_gb_huge_pages << PUD_SHIFT);
+		max_gb_huge_pages = 0;
+	} else
+		max_gb_huge_pages -= gb_huge_pages;
+
+	/* Check if the tail part of the region is usable. */
+	if (region->start + region->size >= pud_end + image_size) {
+		tmp.start = pud_end;
+		tmp.size = region->start + region->size - pud_end;
 		store_slot_info(&tmp, image_size);
 	}
 }
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 13/21] x86/kaslr: Drop test for command-line parameters before parsing
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (13 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 12/21] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 14/21] x86/kaslr: Make the type of number of slots/slot areas consistent Arvind Sankar
                     ` (7 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

This check doesn't save anything. When none of the parameters are
present, each strstr will scan args twice (once to find the length and
then for the search itself), six scans in total. Just going ahead and
parsing the arguments only requires three scans: strlen, memcpy, and
parsing. This is also the first malloc, so free will actually release
the memory, meaning the check doesn't save heap space either.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 7fb699aae74e..7c69fc10a782 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -279,10 +279,6 @@ static void handle_mem_options(void)
 	if (!args)
 		return;
 
-	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
-		!strstr(args, "hugepages"))
-		return;
-
 	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 14/21] x86/kaslr: Make the type of number of slots/slot areas consistent
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (14 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 13/21] x86/kaslr: Drop test for command-line parameters before parsing Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 15/21] x86/kaslr: Drop redundant check in store_slot_info Arvind Sankar
                     ` (6 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

The number of slots can be unsigned int, since on 64-bit the maximum
amount of memory is 2^52 and the minimum alignment is 2^21, so the slot
number cannot be greater than 2^52 / 2^21 = 2^31. But in case future
processors have more than 52 physical address bits, make it unsigned
long.

The slot areas are limited by MAX_SLOT_AREA, currently 100. The array is
indexed by an int, but the number of areas is stored as unsigned long.
Change both to unsigned int for consistency.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 7c69fc10a782..a87800aa6eda 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -508,17 +508,15 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 
 struct slot_area {
 	unsigned long addr;
-	int num;
+	unsigned long num;
 };
 
 #define MAX_SLOT_AREA 100
 
 static struct slot_area slot_areas[MAX_SLOT_AREA];
-
+static unsigned int slot_area_index;
 static unsigned long slot_max;
 
-static unsigned long slot_area_index;
-
 static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 {
 	struct slot_area slot_area;
@@ -587,7 +585,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 static unsigned long slots_fetch_random(void)
 {
 	unsigned long slot;
-	int i;
+	unsigned int i;
 
 	/* Handle case of no slots stored. */
 	if (slot_max == 0)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 15/21] x86/kaslr: Drop redundant check in store_slot_info
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (15 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 14/21] x86/kaslr: Make the type of number of slots/slot areas consistent Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop redundant check in store_slot_info() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 16/21] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr Arvind Sankar
                     ` (5 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Drop the unnecessary check that the number of slots is not zero in
store_slot_info; it's guaranteed to be at least 1 by the calculation.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index a87800aa6eda..2f43b0d7051e 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -525,13 +525,10 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 		return;
 
 	slot_area.addr = region->start;
-	slot_area.num = (region->size - image_size) /
-			CONFIG_PHYSICAL_ALIGN + 1;
+	slot_area.num = 1 + (region->size - image_size) / CONFIG_PHYSICAL_ALIGN;
 
-	if (slot_area.num > 0) {
-		slot_areas[slot_area_index++] = slot_area;
-		slot_max += slot_area.num;
-	}
+	slot_areas[slot_area_index++] = slot_area;
+	slot_max += slot_area.num;
 }
 
 /*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 16/21] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (16 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 15/21] x86/kaslr: Drop redundant check in store_slot_info Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 17/21] x86/kaslr: Small cleanup of find_random_phys_addr Arvind Sankar
                     ` (4 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Drop the unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in
find_random_virt_addr; it cannot change the result: the largest valid
slot is the largest n that satisfies

  minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE

(since minimum is already aligned) and so n is equal to

  (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN

even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN.
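
A standalone numerical check of this claim (the constants mirror the
64-bit defaults but are local stand-ins, not the real
KERNEL_IMAGE_SIZE/CONFIG_PHYSICAL_ALIGN symbols):

#include <stdio.h>

#define K_IMAGE	(512UL << 20)	/* stand-in for KERNEL_IMAGE_SIZE */
#define P_ALIGN	(2UL << 20)	/* stand-in for CONFIG_PHYSICAL_ALIGN */

static unsigned long slots(unsigned long minimum, unsigned long image_size)
{
	return 1 + (K_IMAGE - minimum - image_size) / P_ALIGN;
}

int main(void)
{
	unsigned long minimum = 16UL << 20;		/* already aligned */
	unsigned long img     = (31UL << 20) + 4096;	/* deliberately unaligned */
	unsigned long img_up  = 32UL << 20;		/* ALIGN(img, P_ALIGN) */

	/* Both give the same count, so the dropped ALIGN() was a no-op. */
	printf("%lu %lu\n", slots(minimum, img), slots(minimum, img_up));
	return 0;
}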

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 2f43b0d7051e..4fd60eff048f 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -824,16 +824,12 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Align image_size for easy slot calculations. */
-	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
-
 	/*
 	 * There are how many CONFIG_PHYSICAL_ALIGN-sized slots
 	 * that can hold image_size within the range of minimum to
 	 * KERNEL_IMAGE_SIZE?
 	 */
-	slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
-		 CONFIG_PHYSICAL_ALIGN + 1;
+	slots = 1 + (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN;
 
 	random_addr = kaslr_get_random_long("Virtual") % slots;
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 17/21] x86/kaslr: Small cleanup of find_random_phys_addr
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (17 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 16/21] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Small cleanup of find_random_phys_addr() tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 18/21] x86/kaslr: Make minimum/image_size unsigned long Arvind Sankar
                     ` (3 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Just a trivial rearrangement to do all the processing together, and only
have one call to slots_fetch_random in the source.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 4fd60eff048f..0e103d4d452e 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -812,10 +812,9 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 		return 0;
 	}
 
-	if (process_efi_entries(minimum, image_size))
-		return slots_fetch_random();
+	if (!process_efi_entries(minimum, image_size))
+		process_e820_entries(minimum, image_size);
 
-	process_e820_entries(minimum, image_size);
 	return slots_fetch_random();
 }
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 18/21] x86/kaslr: Make minimum/image_size unsigned long
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (18 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 17/21] x86/kaslr: Small cleanup of find_random_phys_addr Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Make minimum/image_size 'unsigned long' tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 19/21] x86/kaslr: Replace unsigned long long with u64 Arvind Sankar
                     ` (2 subsequent siblings)
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Change type of minimum/image_size arguments in process_mem_region to
unsigned long. These actually can never be above 4G (even on x86_64),
and they're unsigned long in every other function except this one.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 0e103d4d452e..b6ef77b644bd 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -648,8 +648,8 @@ static void __process_mem_region(struct mem_vector *entry,
 }
 
 static bool process_mem_region(struct mem_vector *region,
-			       unsigned long long minimum,
-			       unsigned long long image_size)
+			       unsigned long minimum,
+			       unsigned long image_size)
 {
 	int i;
 	/*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 19/21] x86/kaslr: Replace unsigned long long with u64
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (19 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 18/21] x86/kaslr: Make minimum/image_size unsigned long Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Replace 'unsigned long long' with 'u64' tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 20/21] x86/kaslr: Make local variables 64-bit Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 21/21] x86/kaslr: Add a check that the random address is in range Arvind Sankar
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

No functional change.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 13 ++++++-------
 arch/x86/boot/compressed/misc.h  |  4 ++--
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index b6ef77b644bd..c49d479245d0 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -98,7 +98,7 @@ static bool memmap_too_large;
  * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
  * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
  */
-static unsigned long long mem_limit;
+static u64 mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -141,8 +141,7 @@ enum parse_mode {
 };
 
 static int
-parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
-		enum parse_mode mode)
+parse_memmap(char *p, u64 *start, u64 *size, enum parse_mode mode)
 {
 	char *oldp;
 
@@ -172,7 +171,7 @@ parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
 			 */
 			*size = 0;
 		} else {
-			unsigned long long flags;
+			u64 flags;
 
 			/*
 			 * efi_fake_mem=nn@ss:attr the attr specifies
@@ -211,7 +210,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 	while (str && (i < MAX_MEMMAP_REGIONS)) {
 		int rc;
-		unsigned long long start, size;
+		u64 start, size;
 		char *k = strchr(str, ',');
 
 		if (k)
@@ -611,7 +610,7 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long region_end;
 
 	/* Enforce minimum and memory limit. */
-	region.start = max_t(unsigned long long, entry->start, minimum);
+	region.start = max_t(u64, entry->start, minimum);
 	region_end = min(entry->start + entry->size, mem_limit);
 
 	/* Give up if slot area array is full. */
@@ -672,7 +671,7 @@ static bool process_mem_region(struct mem_vector *region,
 	 * immovable memory and @region.
 	 */
 	for (i = 0; i < num_immovable_mem; i++) {
-		unsigned long long start, end, entry_end, region_end;
+		u64 start, end, entry_end, region_end;
 		struct mem_vector entry;
 
 		if (!mem_overlaps(region, &immovable_mem[i]))
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 726e264410ff..3efce27ba35c 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -70,8 +70,8 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize);
 int cmdline_find_option_bool(const char *option);
 
 struct mem_vector {
-	unsigned long long start;
-	unsigned long long size;
+	u64 start;
+	u64 size;
 };
 
 #if CONFIG_RANDOMIZE_BASE
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 20/21] x86/kaslr: Make local variables 64-bit
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (20 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 19/21] x86/kaslr: Replace unsigned long long with u64 Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 22:57   ` [PATCH v3 21/21] x86/kaslr: Add a check that the random address is in range Arvind Sankar
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Change the type of local variables/fields that store mem_vector
addresses to u64 to make it less likely that 32-bit arithmetic overflow
will cause issues on 32-bit kernels.
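
As a standalone userspace illustration (not kernel code; the slot index
and alignment value below are made up for the example), this is the
kind of truncation a 32-bit multiply can produce:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint32_t slot  = 3000;          /* hypothetical slot index */
          uint32_t align = 0x200000;      /* 2 MiB alignment */

          uint32_t narrow = slot * align;           /* wraps at 2^32 */
          uint64_t wide   = (uint64_t)slot * align; /* full product  */

          printf("32-bit product: %#x\n", (unsigned)narrow);
          printf("64-bit product: %#llx\n", (unsigned long long)wide);
          return 0;
  }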

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index c49d479245d0..38ecbf2e61c5 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -461,7 +461,7 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 {
 	int i;
 	struct setup_data *ptr;
-	unsigned long earliest = img->start + img->size;
+	u64 earliest = img->start + img->size;
 	bool is_overlapping = false;
 
 	for (i = 0; i < MEM_AVOID_MAX; i++) {
@@ -506,7 +506,7 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 }
 
 struct slot_area {
-	unsigned long addr;
+	u64 addr;
 	unsigned long num;
 };
 
@@ -537,7 +537,8 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 static void
 process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 {
-	unsigned long pud_start, pud_end, gb_huge_pages;
+	u64 pud_start, pud_end;
+	unsigned long gb_huge_pages;
 	struct mem_vector tmp;
 
 	if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
@@ -578,7 +579,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 	}
 }
 
-static unsigned long slots_fetch_random(void)
+static u64 slots_fetch_random(void)
 {
 	unsigned long slot;
 	unsigned int i;
@@ -594,7 +595,7 @@ static unsigned long slots_fetch_random(void)
 			slot -= slot_areas[i].num;
 			continue;
 		}
-		return slot_areas[i].addr + slot * CONFIG_PHYSICAL_ALIGN;
+		return slot_areas[i].addr + ((u64)slot * CONFIG_PHYSICAL_ALIGN);
 	}
 
 	if (i == slot_area_index)
@@ -607,7 +608,7 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long region_end;
+	u64 region_end;
 
 	/* Enforce minimum and memory limit. */
 	region.start = max_t(u64, entry->start, minimum);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [PATCH v3 21/21] x86/kaslr: Add a check that the random address is in range
  2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
                     ` (21 preceding siblings ...)
  2020-07-28 22:57   ` [PATCH v3 20/21] x86/kaslr: Make local variables 64-bit Arvind Sankar
@ 2020-07-28 22:57   ` Arvind Sankar
  2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  22 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-28 22:57 UTC (permalink / raw)
  To: Kees Cook, x86; +Cc: linux-kernel

Check in find_random_phys_addr that the chosen address is inside the
range that was required.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 38ecbf2e61c5..903ccdca0551 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -802,6 +802,8 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	u64 phys_addr;
+
 	/* Bail out early if it's impossible to succeed. */
 	if (minimum + image_size > mem_limit)
 		return 0;
@@ -815,7 +817,15 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 	if (!process_efi_entries(minimum, image_size))
 		process_e820_entries(minimum, image_size);
 
-	return slots_fetch_random();
+	phys_addr = slots_fetch_random();
+
+	/* Perform a final check to make sure the address is in range. */
+	if (phys_addr < minimum || phys_addr + image_size > mem_limit) {
+		warn("Invalid physical address chosen!\n");
+		return 0;
+	}
+
+	return (unsigned long)phys_addr;
 }
 
 static unsigned long find_random_virt_addr(unsigned long minimum,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* Re: [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes
  2020-07-28 22:57   ` [PATCH v3 00/21] " Arvind Sankar
@ 2020-07-30 18:02     ` Arvind Sankar
  2020-07-31  9:21       ` Ingo Molnar
  0 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-07-30 18:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Kees Cook, x86, linux-kernel

On Tue, Jul 28, 2020 at 06:57:01PM -0400, Arvind Sankar wrote:
> v2->v3:
> - Fix the first patch: command line size should be strlen + 1 to account
>   for terminating NUL. Avoid calling add_identity_map if cmdline was
>   NULL, though it should do nothing in that case anyway.

Hi Ingo, I noticed that WIP.x86/kaslr and x86/kaslr both have the v2
version of the first patch. That has a bug in the cmd_line_size
calculation (missing the +1).

Thanks.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes
  2020-07-30 18:02     ` Arvind Sankar
@ 2020-07-31  9:21       ` Ingo Molnar
  2020-07-31 23:33         ` Kees Cook
  0 siblings, 1 reply; 91+ messages in thread
From: Ingo Molnar @ 2020-07-31  9:21 UTC (permalink / raw)
  To: Arvind Sankar; +Cc: Kees Cook, x86, linux-kernel


* Arvind Sankar <nivedita@alum.mit.edu> wrote:

> On Tue, Jul 28, 2020 at 06:57:01PM -0400, Arvind Sankar wrote:
> > v2->v3:
> > - Fix the first patch: command line size should be strlen + 1 to account
> >   for terminating NUL. Avoid calling add_identity_map if cmdline was
> >   NULL, though it should do nothing in that case anyway.
> 
> Hi Ingo, I noticed that WIP.x86/kaslr and x86/kaslr both have the v2
> version of the first patch. That has a bug in the cmd_line_size
> calculation (missing the +1).

Indeed, well spotted. I rebased the affected 4 patches in x86/kaslr 
and used the opportunity to add Kees's Reviewed-by to the first 4 
patches as well.

I've zapped tip:x86/kaslr for now and put the whole series into 
tip:WIP.x86/kaslr, will move it into tip:x86/kaslr for a v5.9 merge 
once Kees is happy with the latest version.

Kees, AFAICS your type truncation and patch split-up review 
suggestions were resolved in v3?

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes
  2020-07-31  9:21       ` Ingo Molnar
@ 2020-07-31 23:33         ` Kees Cook
  2020-08-03  1:15           ` [PATCH 0/1] x86/kaslr: Replace strlen with strnlen Arvind Sankar
  2020-08-14 22:47           ` [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
  0 siblings, 2 replies; 91+ messages in thread
From: Kees Cook @ 2020-07-31 23:33 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: Arvind Sankar, x86, linux-kernel

On Fri, Jul 31, 2020 at 11:21:46AM +0200, Ingo Molnar wrote:
> 
> * Arvind Sankar <nivedita@alum.mit.edu> wrote:
> 
> > On Tue, Jul 28, 2020 at 06:57:01PM -0400, Arvind Sankar wrote:
> > > v2->v3:
> > > - Fix the first patch: command line size should be strlen + 1 to account
> > >   for terminating NUL. Avoid calling add_identity_map if cmdline was
> > >   NULL, though it should do nothing in that case anyway.
> > 
> > Hi Ingo, I noticed that WIP.x86/kaslr and x86/kaslr both have the v2
> > version of the first patch. That has a bug in the cmd_line_size
> > calculation (missing the +1).
> 
> Indeed, well spotted. I rebased the affected 4 patches in x86/kaslr 
> and used the opportunity to add Kees's Reviewed-by to the first 4 
> patches as well.
> 
> I've zapped tip:x86/kaslr for now and put the whole series into 
> tip:WIP.x86/kaslr, will move it into tip:x86/kaslr for a v5.9 merge 
> once Kees is happy with the latest version.
> 
> Kees, AFAICS your type truncation and patch split-up review 
> suggestions were resolved in v3?

I need to double-check, but I think so. I'm hoping to get to that on
Monday. My orphan section series work took MUCH longer than I thought it
was going to. :P

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 91+ messages in thread

* [PATCH 0/1] x86/kaslr: Replace strlen with strnlen
  2020-07-31 23:33         ` Kees Cook
@ 2020-08-03  1:15           ` Arvind Sankar
  2020-08-03  1:15             ` [PATCH 1/1] " Arvind Sankar
  2020-08-14 22:47           ` [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
  1 sibling, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-08-03  1:15 UTC (permalink / raw)
  To: Ingo Molnar, Kees Cook; +Cc: x86, linux-kernel

Hi,
I'd like to add one more patch to the series if that's ok.

Thanks.

Arvind Sankar (1):
  x86/kaslr: Replace strlen with strnlen

 arch/x86/boot/compressed/kaslr.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [PATCH 1/1] x86/kaslr: Replace strlen with strnlen
  2020-08-03  1:15           ` [PATCH 0/1] x86/kaslr: Replace strlen with strnlen Arvind Sankar
@ 2020-08-03  1:15             ` Arvind Sankar
  2020-08-06 23:38               ` [tip: x86/kaslr] x86/kaslr: Replace strlen() with strnlen() tip-bot2 for Arvind Sankar
  0 siblings, 1 reply; 91+ messages in thread
From: Arvind Sankar @ 2020-08-03  1:15 UTC (permalink / raw)
  To: Ingo Molnar, Kees Cook; +Cc: x86, linux-kernel

strnlen is safer in case the command line is not NUL-terminated.
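
As a quick userspace illustration (not the boot code; glibc's strnlen()
stands in for the decompressor's string helpers), a buffer that exactly
fills its array with no terminating NUL is still scanned safely:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          /* Deliberately not NUL-terminated: the text exactly fills the array. */
          char cmdline[8] = "mem=512M";

          /* strlen(cmdline) would read past the end; strnlen() stops at the bound. */
          printf("%zu\n", strnlen(cmdline, sizeof(cmdline)));  /* prints 8 */
          return 0;
  }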

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
---
 arch/x86/boot/compressed/kaslr.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 903ccdca0551..a5842eea84cb 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -43,6 +43,10 @@
 #define STATIC
 #include <linux/decompress/mm.h>
 
+#define _SETUP
+#include <asm/setup.h>	/* For COMMAND_LINE_SIZE */
+#undef _SETUP
+
 #ifdef CONFIG_X86_5LEVEL
 unsigned int __pgtable_l5_enabled;
 unsigned int pgdir_shift __ro_after_init = 39;
@@ -278,7 +282,7 @@ static void handle_mem_options(void)
 	if (!args)
 		return;
 
-	len = strlen(args);
+	len = strnlen(args, COMMAND_LINE_SIZE-1);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -425,7 +429,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
 	if (cmd_line) {
-		cmd_line_size = strlen((char *)cmd_line) + 1;
+		cmd_line_size = strnlen((char *)cmd_line, COMMAND_LINE_SIZE-1) + 1;
 		mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
 		mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
 		add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Replace strlen() with strnlen()
  2020-08-03  1:15             ` [PATCH 1/1] " Arvind Sankar
@ 2020-08-06 23:38               ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     76167e5c5457aee8fba3edc5b8554183696fc94d
Gitweb:        https://git.kernel.org/tip/76167e5c5457aee8fba3edc5b8554183696fc94d
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Sun, 02 Aug 2020 21:15:34 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Thu, 06 Aug 2020 17:03:19 +02:00

x86/kaslr: Replace strlen() with strnlen()

strnlen is safer in case the command line is not NUL-terminated.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200803011534.730645-2-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 735fcb2..6d39743 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -43,6 +43,10 @@
 #define STATIC
 #include <linux/decompress/mm.h>
 
+#define _SETUP
+#include <asm/setup.h>	/* For COMMAND_LINE_SIZE */
+#undef _SETUP
+
 #ifdef CONFIG_X86_5LEVEL
 unsigned int __pgtable_l5_enabled;
 unsigned int pgdir_shift __ro_after_init = 39;
@@ -278,7 +282,7 @@ static void handle_mem_options(void)
 	if (!args)
 		return;
 
-	len = strlen(args);
+	len = strnlen(args, COMMAND_LINE_SIZE-1);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -425,7 +429,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
 	if (cmd_line) {
-		cmd_line_size = strlen((char *)cmd_line) + 1;
+		cmd_line_size = strnlen((char *)cmd_line, COMMAND_LINE_SIZE-1) + 1;
 		mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
 		mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
 		add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Add a check that the random address is in range
  2020-07-28 22:57   ` [PATCH v3 21/21] x86/kaslr: Add a check that the random address is in range Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     f49236ae424d499d02ee3ce35fb9130ddf95b03f
Gitweb:        https://git.kernel.org/tip/f49236ae424d499d02ee3ce35fb9130ddf95b03f
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:22 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Add a check that the random address is in range

Check in find_random_phys_addr() that the chosen address is inside the
range that was required.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-22-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 80cdd20..735fcb2 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -803,6 +803,8 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	u64 phys_addr;
+
 	/* Bail out early if it's impossible to succeed. */
 	if (minimum + image_size > mem_limit)
 		return 0;
@@ -816,7 +818,15 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 	if (!process_efi_entries(minimum, image_size))
 		process_e820_entries(minimum, image_size);
 
-	return slots_fetch_random();
+	phys_addr = slots_fetch_random();
+
+	/* Perform a final check to make sure the address is in range. */
+	if (phys_addr < minimum || phys_addr + image_size > mem_limit) {
+		warn("Invalid physical address chosen!\n");
+		return 0;
+	}
+
+	return (unsigned long)phys_addr;
 }
 
 static unsigned long find_random_virt_addr(unsigned long minimum,

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Make local variables 64-bit
  2020-07-28 22:57   ` [PATCH v3 20/21] x86/kaslr: Make local variables 64-bit Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     0eb1a8af01d6264cf948d67c8bff15e2eb859355
Gitweb:        https://git.kernel.org/tip/0eb1a8af01d6264cf948d67c8bff15e2eb859355
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:21 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Make local variables 64-bit

Change the type of local variables/fields that store mem_vector
addresses to u64 to make it less likely that 32-bit arithmetic overflow
will cause issues on 32-bit kernels.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-21-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index db8589c..80cdd20 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -461,7 +461,7 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 {
 	int i;
 	struct setup_data *ptr;
-	unsigned long earliest = img->start + img->size;
+	u64 earliest = img->start + img->size;
 	bool is_overlapping = false;
 
 	for (i = 0; i < MEM_AVOID_MAX; i++) {
@@ -506,7 +506,7 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 }
 
 struct slot_area {
-	unsigned long addr;
+	u64 addr;
 	unsigned long num;
 };
 
@@ -537,7 +537,8 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 static void
 process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 {
-	unsigned long pud_start, pud_end, gb_huge_pages;
+	u64 pud_start, pud_end;
+	unsigned long gb_huge_pages;
 	struct mem_vector tmp;
 
 	if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
@@ -579,7 +580,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 	}
 }
 
-static unsigned long slots_fetch_random(void)
+static u64 slots_fetch_random(void)
 {
 	unsigned long slot;
 	unsigned int i;
@@ -595,7 +596,7 @@ static unsigned long slots_fetch_random(void)
 			slot -= slot_areas[i].num;
 			continue;
 		}
-		return slot_areas[i].addr + slot * CONFIG_PHYSICAL_ALIGN;
+		return slot_areas[i].addr + ((u64)slot * CONFIG_PHYSICAL_ALIGN);
 	}
 
 	if (i == slot_area_index)
@@ -608,7 +609,7 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long region_end;
+	u64 region_end;
 
 	/* Enforce minimum and memory limit. */
 	region.start = max_t(u64, entry->start, minimum);

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Replace 'unsigned long long' with 'u64'
  2020-07-28 22:57   ` [PATCH v3 19/21] x86/kaslr: Replace unsigned long long with u64 Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     3a066990a35eb289d54036637d2793d4743b8f07
Gitweb:        https://git.kernel.org/tip/3a066990a35eb289d54036637d2793d4743b8f07
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:20 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Replace 'unsigned long long' with 'u64'

No functional change.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-20-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 13 ++++++-------
 arch/x86/boot/compressed/misc.h  |  4 ++--
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 3244f5b..db8589c 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -98,7 +98,7 @@ static bool memmap_too_large;
  * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
  * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
  */
-static unsigned long long mem_limit;
+static u64 mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -141,8 +141,7 @@ enum parse_mode {
 };
 
 static int
-parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
-		enum parse_mode mode)
+parse_memmap(char *p, u64 *start, u64 *size, enum parse_mode mode)
 {
 	char *oldp;
 
@@ -172,7 +171,7 @@ parse_memmap(char *p, unsigned long long *start, unsigned long long *size,
 			 */
 			*size = 0;
 		} else {
-			unsigned long long flags;
+			u64 flags;
 
 			/*
 			 * efi_fake_mem=nn@ss:attr the attr specifies
@@ -211,7 +210,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 	while (str && (i < MAX_MEMMAP_REGIONS)) {
 		int rc;
-		unsigned long long start, size;
+		u64 start, size;
 		char *k = strchr(str, ',');
 
 		if (k)
@@ -612,7 +611,7 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long region_end;
 
 	/* Enforce minimum and memory limit. */
-	region.start = max_t(unsigned long long, entry->start, minimum);
+	region.start = max_t(u64, entry->start, minimum);
 	region_end = min(entry->start + entry->size, mem_limit);
 
 	/* Give up if slot area array is full. */
@@ -673,7 +672,7 @@ static bool process_mem_region(struct mem_vector *region,
 	 * immovable memory and @region.
 	 */
 	for (i = 0; i < num_immovable_mem; i++) {
-		unsigned long long start, end, entry_end, region_end;
+		u64 start, end, entry_end, region_end;
 		struct mem_vector entry;
 
 		if (!mem_overlaps(region, &immovable_mem[i]))
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 726e264..3efce27 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -70,8 +70,8 @@ int cmdline_find_option(const char *option, char *buffer, int bufsize);
 int cmdline_find_option_bool(const char *option);
 
 struct mem_vector {
-	unsigned long long start;
-	unsigned long long size;
+	u64 start;
+	u64 size;
 };
 
 #if CONFIG_RANDOMIZE_BASE

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Make minimum/image_size 'unsigned long'
  2020-07-28 22:57   ` [PATCH v3 18/21] x86/kaslr: Make minimum/image_size unsigned long Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     e4cb955bf173474a61f56200610004aacc7a62ff
Gitweb:        https://git.kernel.org/tip/e4cb955bf173474a61f56200610004aacc7a62ff
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:19 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Make minimum/image_size 'unsigned long'

Change the type of the minimum/image_size arguments in process_mem_region()
to 'unsigned long'. These can never actually be above 4G (even on x86_64),
and they are 'unsigned long' in every other function except this one.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-19-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index ecdf33d..3244f5b 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -649,8 +649,8 @@ static void __process_mem_region(struct mem_vector *entry,
 }
 
 static bool process_mem_region(struct mem_vector *region,
-			       unsigned long long minimum,
-			       unsigned long long image_size)
+			       unsigned long minimum,
+			       unsigned long image_size)
 {
 	int i;
 	/*

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr()
  2020-07-28 22:57   ` [PATCH v3 16/21] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     eb38be6db516fb72ccf7282628b545a185b3bc7a
Gitweb:        https://git.kernel.org/tip/eb38be6db516fb72ccf7282628b545a185b3bc7a
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:17 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Drop unnecessary alignment in find_random_virt_addr()

Drop the unnecessary alignment of image_size to CONFIG_PHYSICAL_ALIGN in
find_random_virt_addr(); it cannot change the result: the largest valid
slot is the largest n that satisfies

  minimum + n * CONFIG_PHYSICAL_ALIGN + image_size <= KERNEL_IMAGE_SIZE

(since minimum is already aligned) and so n is equal to

  (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN

even if image_size is not aligned to CONFIG_PHYSICAL_ALIGN.
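
As a standalone check (not kernel code; the numbers are made up, with
minimum already aligned and image_size deliberately unaligned), both
formulas give the same slot count:

  #include <stdio.h>

  #define ALIGN(x, a)        (((x) + (a) - 1) & ~((a) - 1))
  #define KERNEL_IMAGE_SIZE  (512UL << 20)
  #define PHYSICAL_ALIGN     0x200000UL

  int main(void)
  {
          unsigned long minimum    = 16UL << 20;   /* already aligned        */
          unsigned long image_size = 0x1234567UL;  /* deliberately unaligned */

          unsigned long with_align =
                  (KERNEL_IMAGE_SIZE - minimum -
                   ALIGN(image_size, PHYSICAL_ALIGN)) / PHYSICAL_ALIGN + 1;
          unsigned long without_align =
                  (KERNEL_IMAGE_SIZE - minimum - image_size) / PHYSICAL_ALIGN + 1;

          printf("%lu %lu\n", with_align, without_align);  /* both print 239 */
          return 0;
  }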

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-17-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 0c64026..ce34a05 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -825,16 +825,12 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Align image_size for easy slot calculations. */
-	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
-
 	/*
 	 * There are how many CONFIG_PHYSICAL_ALIGN-sized slots
 	 * that can hold image_size within the range of minimum to
 	 * KERNEL_IMAGE_SIZE?
 	 */
-	slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
-		 CONFIG_PHYSICAL_ALIGN + 1;
+	slots = 1 + (KERNEL_IMAGE_SIZE - minimum - image_size) / CONFIG_PHYSICAL_ALIGN;
 
 	random_addr = kaslr_get_random_long("Virtual") % slots;
 

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Small cleanup of find_random_phys_addr()
  2020-07-28 22:57   ` [PATCH v3 17/21] x86/kaslr: Small cleanup of find_random_phys_addr Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     4268b4da572f8bde8bc2f3243927ff5795687a6f
Gitweb:        https://git.kernel.org/tip/4268b4da572f8bde8bc2f3243927ff5795687a6f
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:18 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Small cleanup of find_random_phys_addr()

Just a trivial rearrangement to do all the processing together, and only
have one call to slots_fetch_random() in the source.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-18-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index ce34a05..ecdf33d 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -813,10 +813,9 @@ static unsigned long find_random_phys_addr(unsigned long minimum,
 		return 0;
 	}
 
-	if (process_efi_entries(minimum, image_size))
-		return slots_fetch_random();
+	if (!process_efi_entries(minimum, image_size))
+		process_e820_entries(minimum, image_size);
 
-	process_e820_entries(minimum, image_size);
 	return slots_fetch_random();
 }
 

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Drop redundant check in store_slot_info()
  2020-07-28 22:57   ` [PATCH v3 15/21] x86/kaslr: Drop redundant check in store_slot_info Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     46a5b29a4a63a3ba987cbb5467774a4b5787618e
Gitweb:        https://git.kernel.org/tip/46a5b29a4a63a3ba987cbb5467774a4b5787618e
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:16 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Drop redundant check in store_slot_info()

Drop the unnecessary check that the number of slots is not zero in
store_slot_info(); it is guaranteed to be at least 1 by the calculation.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-16-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c |  9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 5c7457c..0c64026 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -525,13 +525,10 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 		return;
 
 	slot_area.addr = region->start;
-	slot_area.num = (region->size - image_size) /
-			CONFIG_PHYSICAL_ALIGN + 1;
+	slot_area.num = 1 + (region->size - image_size) / CONFIG_PHYSICAL_ALIGN;
 
-	if (slot_area.num > 0) {
-		slot_areas[slot_area_index++] = slot_area;
-		slot_max += slot_area.num;
-	}
+	slot_areas[slot_area_index++] = slot_area;
+	slot_max += slot_area.num;
 }
 
 /*

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Drop test for command-line parameters before parsing
  2020-07-28 22:57   ` [PATCH v3 13/21] x86/kaslr: Drop test for command-line parameters before parsing Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     3870d971791f13df88a7a656e3fd6e2df8686097
Gitweb:        https://git.kernel.org/tip/3870d971791f13df88a7a656e3fd6e2df8686097
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:14 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Drop test for command-line parameters before parsing

This check doesn't save anything. When none of the parameters are
present, each strstr will scan args twice (once to find the length and
then for searching), six scans in total. Just going ahead and parsing
the arguments requires only three scans: strlen, memcpy, and parsing.
Since this is the first malloc, the subsequent free will actually
release the memory, so the check doesn't save heap space either.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-14-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 00ef84b..bd13dc5 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -279,10 +279,6 @@ static void handle_mem_options(void)
 	if (!args)
 		return;
 
-	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
-		!strstr(args, "hugepages"))
-		return;
-
 	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Make the type of number of slots/slot areas consistent
  2020-07-28 22:57   ` [PATCH v3 14/21] x86/kaslr: Make the type of number of slots/slot areas consistent Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     d6d0f36c735367ed7cf42b5ba454ba5858e17816
Gitweb:        https://git.kernel.org/tip/d6d0f36c735367ed7cf42b5ba454ba5858e17816
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:15 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Make the type of number of slots/slot areas consistent

The number of slots could be 'unsigned int': on 64-bit, the maximum
amount of memory is 2^52 bytes and the minimum alignment is 2^21 bytes,
so the slot number cannot exceed 2^31. But in case future processors
have more than 52 physical address bits, make it 'unsigned long'.

The slot areas are limited by MAX_SLOT_AREA, currently 100. It is
indexed by an int, but the number of areas is stored as 'unsigned long'.
Change both to 'unsigned int' for consistency.
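
A quick back-of-the-envelope check of the bound above (plain userspace
arithmetic, not kernel code):

  #include <stdio.h>

  int main(void)
  {
          unsigned long long max_mem   = 1ULL << 52;  /* 4-level paging limit  */
          unsigned long long min_align = 1ULL << 21;  /* CONFIG_PHYSICAL_ALIGN */

          /* 2^52 / 2^21 = 2^31 slots at most. */
          printf("max slots = %llu\n", max_mem / min_align);
          return 0;
  }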

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-15-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index bd13dc5..5c7457c 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -508,17 +508,15 @@ static bool mem_avoid_overlap(struct mem_vector *img,
 
 struct slot_area {
 	unsigned long addr;
-	int num;
+	unsigned long num;
 };
 
 #define MAX_SLOT_AREA 100
 
 static struct slot_area slot_areas[MAX_SLOT_AREA];
-
+static unsigned int slot_area_index;
 static unsigned long slot_max;
 
-static unsigned long slot_area_index;
-
 static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 {
 	struct slot_area slot_area;
@@ -588,7 +586,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 static unsigned long slots_fetch_random(void)
 {
 	unsigned long slot;
-	int i;
+	unsigned int i;
 
 	/* Handle case of no slots stored. */
 	if (slot_max == 0)

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Short-circuit gb_huge_pages on x86-32
  2020-07-28 22:57   ` [PATCH v3 11/21] x86/kaslr: Short-circuit gb_huge_pages on x86-32 Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     50def2693a900dfb1d91872056dc8164245820fc
Gitweb:        https://git.kernel.org/tip/50def2693a900dfb1d91872056dc8164245820fc
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:12 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Short-circuit gb_huge_pages on x86-32

32-bit does not have GB pages, so don't bother checking for them. Using
the IS_ENABLED() macro allows the compiler to completely remove the
gb_huge_pages code.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-12-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 0df513e..3727e97 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -303,7 +303,7 @@ static void handle_mem_options(void)
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
-		} else if (strstr(param, "hugepages")) {
+		} else if (IS_ENABLED(CONFIG_X86_64) && strstr(param, "hugepages")) {
 			parse_gb_huge_pages(param, val);
 		} else if (!strcmp(param, "mem")) {
 			char *p = val;
@@ -551,7 +551,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 	struct mem_vector tmp;
 	int i = 0;
 
-	if (!max_gb_huge_pages) {
+	if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
 		store_slot_info(region, image_size);
 		return;
 	}

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Simplify process_gb_huge_pages()
  2020-07-28 22:57   ` [PATCH v3 12/21] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     be9e8d9541a95bdfac1c13d112cc032ea7fc745f
Gitweb:        https://git.kernel.org/tip/be9e8d9541a95bdfac1c13d112cc032ea7fc745f
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:13 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Simplify process_gb_huge_pages()

Replace the loop used to determine the number of 1GB pages with
arithmetic.
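
A standalone sketch of the arithmetic (not the kernel code; the region
values are made up and PUD_SHIFT is the x86-64 value):

  #include <stdio.h>

  #define PUD_SHIFT        30
  #define PUD_SIZE         (1ULL << PUD_SHIFT)
  #define ALIGN(x, a)      (((x) + (a) - 1) & ~((a) - 1))
  #define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

  int main(void)
  {
          unsigned long long start = 0x2345678ULL;  /* region start      */
          unsigned long long size  = 5ULL << 30;    /* 5 GiB region size */

          unsigned long long pud_start = ALIGN(start, PUD_SIZE);
          unsigned long long pud_end   = ALIGN_DOWN(start + size, PUD_SIZE);
          unsigned long long gb_pages  = pud_start < pud_end ?
                                         (pud_end - pud_start) >> PUD_SHIFT : 0;

          printf("%llu 1GB page(s) between %#llx and %#llx\n",
                 gb_pages, pud_start, pud_end);  /* 4 pages */
          return 0;
  }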

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-13-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 47 +++++++++++++------------------
 1 file changed, 21 insertions(+), 26 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 3727e97..00ef84b 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -547,49 +547,44 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
 static void
 process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 {
-	unsigned long addr, size = 0;
+	unsigned long pud_start, pud_end, gb_huge_pages;
 	struct mem_vector tmp;
-	int i = 0;
 
 	if (!IS_ENABLED(CONFIG_X86_64) || !max_gb_huge_pages) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	addr = ALIGN(region->start, PUD_SIZE);
-	/* Did we raise the address above the passed in memory entry? */
-	if (addr < region->start + region->size)
-		size = region->size - (addr - region->start);
-
-	/* Check how many 1GB huge pages can be filtered out: */
-	while (size >= PUD_SIZE && max_gb_huge_pages) {
-		size -= PUD_SIZE;
-		max_gb_huge_pages--;
-		i++;
-	}
+	/* Are there any 1GB pages in the region? */
+	pud_start = ALIGN(region->start, PUD_SIZE);
+	pud_end = ALIGN_DOWN(region->start + region->size, PUD_SIZE);
 
 	/* No good 1GB huge pages found: */
-	if (!i) {
+	if (pud_start >= pud_end) {
 		store_slot_info(region, image_size);
 		return;
 	}
 
-	/*
-	 * Skip those 'i'*1GB good huge pages, and continue checking and
-	 * processing the remaining head or tail part of the passed region
-	 * if available.
-	 */
-
-	if (addr >= region->start + image_size) {
+	/* Check if the head part of the region is usable. */
+	if (pud_start >= region->start + image_size) {
 		tmp.start = region->start;
-		tmp.size = addr - region->start;
+		tmp.size = pud_start - region->start;
 		store_slot_info(&tmp, image_size);
 	}
 
-	size  = region->size - (addr - region->start) - i * PUD_SIZE;
-	if (size >= image_size) {
-		tmp.start = addr + i * PUD_SIZE;
-		tmp.size = size;
+	/* Skip the good 1GB pages. */
+	gb_huge_pages = (pud_end - pud_start) >> PUD_SHIFT;
+	if (gb_huge_pages > max_gb_huge_pages) {
+		pud_end = pud_start + (max_gb_huge_pages << PUD_SHIFT);
+		max_gb_huge_pages = 0;
+	} else {
+		max_gb_huge_pages -= gb_huge_pages;
+	}
+
+	/* Check if the tail part of the region is usable. */
+	if (region->start + region->size >= pud_end + image_size) {
+		tmp.start = pud_end;
+		tmp.size = region->start + region->size - pud_end;
 		store_slot_info(&tmp, image_size);
 	}
 }

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Fix off-by-one error in process_gb_huge_pages()
  2020-07-28 22:57   ` [PATCH v3 10/21] x86/kaslr: Fix off-by-one error in process_gb_huge_pages Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     79c2fd2afe55944098047721c33e06fd48654e57
Gitweb:        https://git.kernel.org/tip/79c2fd2afe55944098047721c33e06fd48654e57
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:11 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Fix off-by-one error in process_gb_huge_pages()

If the remaining size of the region is exactly 1GB, there is still one
huge page that can be reserved.
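
A tiny standalone check of the boundary case (not kernel code), with
the remaining size exactly equal to PUD_SIZE:

  #include <stdio.h>

  #define PUD_SIZE (1ULL << 30)

  int main(void)
  {
          unsigned long long size = PUD_SIZE;  /* exactly 1GB left */

          printf("old (size >  PUD_SIZE): %d page(s)\n", size >  PUD_SIZE ? 1 : 0);
          printf("new (size >= PUD_SIZE): %d page(s)\n", size >= PUD_SIZE ? 1 : 0);
          return 0;
  }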

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-11-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d074986..0df513e 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -562,7 +562,7 @@ process_gb_huge_pages(struct mem_vector *region, unsigned long image_size)
 		size = region->size - (addr - region->start);
 
 	/* Check how many 1GB huge pages can be filtered out: */
-	while (size > PUD_SIZE && max_gb_huge_pages) {
+	while (size >= PUD_SIZE && max_gb_huge_pages) {
 		size -= PUD_SIZE;
 		max_gb_huge_pages--;
 		i++;

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Drop some redundant checks from __process_mem_region()
  2020-07-28 22:57   ` [PATCH v3 09/21] x86/kaslr: Drop some redundant checks from __process_mem_region Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     bf457be1548eee6d106daf9604e029b36fed2b11
Gitweb:        https://git.kernel.org/tip/bf457be1548eee6d106daf9604e029b36fed2b11
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:10 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Drop some redundant checks from __process_mem_region()

Clip the start and end of the region to minimum and mem_limit prior to
the loop. region.start can only increase during the loop, so raising it
to minimum before the loop is enough.

A region that becomes empty due to this will get checked in
the first iteration of the loop.

Drop the check for overlap extending beyond the end of the region. This
will get checked in the next loop iteration anyway.

Rename end to region_end for symmetry with region.start.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-10-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 8cc47fa..d074986 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -623,34 +623,23 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long end;
+	unsigned long region_end;
 
-	/* Ignore entries entirely below our minimum. */
-	if (entry->start + entry->size < minimum)
-		return;
-
-	/* Ignore entries above memory limit */
-	end = min(entry->size + entry->start, mem_limit);
-	if (entry->start >= end)
-		return;
-
-	region.start = entry->start;
+	/* Enforce minimum and memory limit. */
+	region.start = max_t(unsigned long long, entry->start, minimum);
+	region_end = min(entry->start + entry->size, mem_limit);
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
-		/* Potentially raise address to minimum location. */
-		if (region.start < minimum)
-			region.start = minimum;
-
 		/* Potentially raise address to meet alignment needs. */
 		region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
 
 		/* Did we raise the address above the passed in memory entry? */
-		if (region.start > end)
+		if (region.start > region_end)
 			return;
 
 		/* Reduce size by any delta from the original address. */
-		region.size = end - region.start;
+		region.size = region_end - region.start;
 
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
@@ -668,10 +657,6 @@ static void __process_mem_region(struct mem_vector *entry,
 			process_gb_huge_pages(&region, image_size);
 		}
 
-		/* Return if overlap extends to or past end of region. */
-		if (overlap.start + overlap.size >= region.start + region.size)
-			return;
-
 		/* Clip off the overlapping region and start over. */
 		region.start = overlap.start + overlap.size;
 	}

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Drop redundant variable in __process_mem_region()
  2020-07-28 22:57   ` [PATCH v3 08/21] x86/kaslr: Drop redundant variable in __process_mem_region Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     ef7b07d59e2f18042622cecde0c7a89b60f33a89
Gitweb:        https://git.kernel.org/tip/ef7b07d59e2f18042622cecde0c7a89b60f33a89
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:09 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Drop redundant variable in __process_mem_region()

region.size can be trimmed to cover just the portion of the region
before the overlap, instead of using a separate mem_vector variable.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-9-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index e978c35..8cc47fa 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -664,11 +664,8 @@ static void __process_mem_region(struct mem_vector *entry,
 
 		/* Store beginning of region if holds at least image_size. */
 		if (overlap.start >= region.start + image_size) {
-			struct mem_vector beginning;
-
-			beginning.start = region.start;
-			beginning.size = overlap.start - region.start;
-			process_gb_huge_pages(&beginning, image_size);
+			region.size = overlap.start - region.start;
+			process_gb_huge_pages(&region, image_size);
 		}
 
 		/* Return if overlap extends to or past end of region. */

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Eliminate 'start_orig' local variable from __process_mem_region()
  2020-07-28 22:57   ` [PATCH v3 07/21] x86/kaslr: Eliminate start_orig from __process_mem_region Arvind Sankar
@ 2020-08-06 23:38     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:38 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     ee435ee6490d147c1b9963cc8b331665e4cea634
Gitweb:        https://git.kernel.org/tip/ee435ee6490d147c1b9963cc8b331665e4cea634
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:08 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Eliminate 'start_orig' local variable from __process_mem_region()

Set region.size within the loop, which removes the need for
start_orig.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-8-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index f2454ee..e978c35 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -623,7 +623,7 @@ static void __process_mem_region(struct mem_vector *entry,
 				 unsigned long image_size)
 {
 	struct mem_vector region, overlap;
-	unsigned long start_orig, end;
+	unsigned long end;
 
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
@@ -635,12 +635,9 @@ static void __process_mem_region(struct mem_vector *entry,
 		return;
 
 	region.start = entry->start;
-	region.size = end - entry->start;
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
-		start_orig = region.start;
-
 		/* Potentially raise address to minimum location. */
 		if (region.start < minimum)
 			region.start = minimum;
@@ -653,7 +650,7 @@ static void __process_mem_region(struct mem_vector *entry,
 			return;
 
 		/* Reduce size by any delta from the original address. */
-		region.size -= region.start - start_orig;
+		region.size = end - region.start;
 
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
@@ -679,7 +676,6 @@ static void __process_mem_region(struct mem_vector *entry,
 			return;
 
 		/* Clip off the overlapping region and start over. */
-		region.size -= overlap.start - region.start + overlap.size;
 		region.start = overlap.start + overlap.size;
 	}
 }

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Drop redundant cur_entry from __process_mem_region()
  2020-07-28 22:57   ` [PATCH v3 06/21] x86/kaslr: Drop redundant cur_entry from __process_mem_region Arvind Sankar
@ 2020-08-06 23:39     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     3f9412c73053a5be311607e42560c1303a873be7
Gitweb:        https://git.kernel.org/tip/3f9412c73053a5be311607e42560c1303a873be7
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:07 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Drop redundant cur_entry from __process_mem_region()

cur_entry is only used as cur_entry.start + cur_entry.size, which is
always equal to end.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-7-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c |  9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 848346f..f2454ee 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -624,7 +624,6 @@ static void __process_mem_region(struct mem_vector *entry,
 {
 	struct mem_vector region, overlap;
 	unsigned long start_orig, end;
-	struct mem_vector cur_entry;
 
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
@@ -634,11 +633,9 @@ static void __process_mem_region(struct mem_vector *entry,
 	end = min(entry->size + entry->start, mem_limit);
 	if (entry->start >= end)
 		return;
-	cur_entry.start = entry->start;
-	cur_entry.size = end - entry->start;
 
-	region.start = cur_entry.start;
-	region.size = cur_entry.size;
+	region.start = entry->start;
+	region.size = end - entry->start;
 
 	/* Give up if slot area array is full. */
 	while (slot_area_index < MAX_SLOT_AREA) {
@@ -652,7 +649,7 @@ static void __process_mem_region(struct mem_vector *entry,
 		region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
 
 		/* Did we raise the address above the passed in memory entry? */
-		if (region.start > cur_entry.start + cur_entry.size)
+		if (region.start > end)
 			return;
 
 		/* Reduce size by any delta from the original address. */

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Fix off-by-one error in __process_mem_region()
  2020-07-28 22:57   ` [PATCH v3 05/21] x86/kaslr: Fix off-by-one error in __process_mem_region Arvind Sankar
@ 2020-08-06 23:39     ` tip-bot2 for Arvind Sankar
  0 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     8d1cf8595860f4807f4ff1f8f1fc53e7576e0d71
Gitweb:        https://git.kernel.org/tip/8d1cf8595860f4807f4ff1f8f1fc53e7576e0d71
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Tue, 28 Jul 2020 18:57:06 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Fix off-by-one error in __process_mem_region()

In case of an overlap, the beginning of the region should be used even
if it is exactly image_size in size, not only when it is strictly larger.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200728225722.67457-6-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index da45e66..848346f 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -669,7 +669,7 @@ static void __process_mem_region(struct mem_vector *entry,
 		}
 
 		/* Store beginning of region if holds at least image_size. */
-		if (overlap.start > region.start + image_size) {
+		if (overlap.start >= region.start + image_size) {
 			struct mem_vector beginning;
 
 			beginning.start = region.start;
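
A worked example of the boundary case, with made-up numbers:

	image_size    = 0x1000000	/* 16M */
	region.start  = 0x2000000
	overlap.start = 0x3000000	/* == region.start + image_size */

The span [region.start, overlap.start) is exactly 16M, just large enough to
hold the decompressed kernel, so the '>=' comparison records it as a slot
region; the old '>' comparison skipped it.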

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Initialize mem_limit to the real maximum address
  2020-07-27 23:07 ` [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 18:49   ` [PATCH v2 4/8] " Kees Cook
@ 2020-08-06 23:39   ` tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, Kees Cook, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     451286940d95778e83fa7f97006316d995b4c4a8
Gitweb:        https://git.kernel.org/tip/451286940d95778e83fa7f97006316d995b4c4a8
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:57 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:17 +02:00

x86/kaslr: Initialize mem_limit to the real maximum address

On 64-bit, the kernel must be placed below MAXMEM (64TiB with 4-level
paging or 4PiB with 5-level paging). This is currently not enforced by
KASLR, which thus implicitly relies on physical memory being limited to
less than 64TiB.

On 32-bit, the limit is KERNEL_IMAGE_SIZE (512MiB). This is enforced by
special checks in __process_mem_region().

Initialize mem_limit to the architecture-dependent maximum instead of
ULLONG_MAX, and make sure the command-line arguments can only decrease
it. This makes the enforcement explicit on 64-bit, and eliminates the
32-bit-specific checks that kept the kernel below 512MiB.

Check upfront to make sure the minimum address is below the limit before
doing any work.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-5-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 41 ++++++++++++++++---------------
 1 file changed, 22 insertions(+), 19 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 1ab67a8..da45e66 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -94,8 +94,11 @@ static unsigned long get_boot_seed(void)
 static bool memmap_too_large;
 
 
-/* Store memory limit specified by "mem=nn[KMG]" or "memmap=nn[KMG]" */
-static unsigned long long mem_limit = ULLONG_MAX;
+/*
+ * Store memory limit: MAXMEM on 64-bit and KERNEL_IMAGE_SIZE on 32-bit.
+ * It may be reduced by "mem=nn[KMG]" or "memmap=nn[KMG]" command line options.
+ */
+static unsigned long long mem_limit;
 
 /* Number of immovable memory regions */
 static int num_immovable_mem;
@@ -221,7 +224,7 @@ static void mem_avoid_memmap(enum parse_mode mode, char *str)
 
 		if (start == 0) {
 			/* Store the specified memory limit if size > 0 */
-			if (size > 0)
+			if (size > 0 && size < mem_limit)
 				mem_limit = size;
 
 			continue;
@@ -311,7 +314,8 @@ static void handle_mem_options(void)
 			if (mem_size == 0)
 				break;
 
-			mem_limit = mem_size;
+			if (mem_size < mem_limit)
+				mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
 			mem_avoid_memmap(PARSE_EFI, val);
 		}
@@ -322,7 +326,9 @@ static void handle_mem_options(void)
 }
 
 /*
- * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
+ * In theory, KASLR can put the kernel anywhere in the range of [16M, MAXMEM)
+ * on 64-bit, and [16M, KERNEL_IMAGE_SIZE) on 32-bit.
+ *
  * The mem_avoid array is used to store the ranges that need to be avoided
  * when KASLR searches for an appropriate random address. We must avoid any
  * regions that are unsafe to overlap with during decompression, and other
@@ -620,10 +626,6 @@ static void __process_mem_region(struct mem_vector *entry,
 	unsigned long start_orig, end;
 	struct mem_vector cur_entry;
 
-	/* On 32-bit, ignore entries entirely above our maximum. */
-	if (IS_ENABLED(CONFIG_X86_32) && entry->start >= KERNEL_IMAGE_SIZE)
-		return;
-
 	/* Ignore entries entirely below our minimum. */
 	if (entry->start + entry->size < minimum)
 		return;
@@ -656,11 +658,6 @@ static void __process_mem_region(struct mem_vector *entry,
 		/* Reduce size by any delta from the original address. */
 		region.size -= region.start - start_orig;
 
-		/* On 32-bit, reduce region size to fit within max size. */
-		if (IS_ENABLED(CONFIG_X86_32) &&
-		    region.start + region.size > KERNEL_IMAGE_SIZE)
-			region.size = KERNEL_IMAGE_SIZE - region.start;
-
 		/* Return if region can't contain decompressed kernel */
 		if (region.size < image_size)
 			return;
@@ -845,15 +842,16 @@ static void process_e820_entries(unsigned long minimum,
 static unsigned long find_random_phys_addr(unsigned long minimum,
 					   unsigned long image_size)
 {
+	/* Bail out early if it's impossible to succeed. */
+	if (minimum + image_size > mem_limit)
+		return 0;
+
 	/* Check if we had too many memmaps. */
 	if (memmap_too_large) {
 		debug_putstr("Aborted memory entries scan (more than 4 memmap= args)!\n");
 		return 0;
 	}
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
-
 	if (process_efi_entries(minimum, image_size))
 		return slots_fetch_random();
 
@@ -866,8 +864,6 @@ static unsigned long find_random_virt_addr(unsigned long minimum,
 {
 	unsigned long slots, random_addr;
 
-	/* Make sure minimum is aligned. */
-	minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
 	/* Align image_size for easy slot calculations. */
 	image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
 
@@ -914,6 +910,11 @@ void choose_random_location(unsigned long input,
 	/* Prepare to add new identity pagetables on demand. */
 	initialize_identity_maps();
 
+	if (IS_ENABLED(CONFIG_X86_32))
+		mem_limit = KERNEL_IMAGE_SIZE;
+	else
+		mem_limit = MAXMEM;
+
 	/* Record the various known unsafe memory ranges. */
 	mem_avoid_init(input, input_size, *output);
 
@@ -923,6 +924,8 @@ void choose_random_location(unsigned long input,
 	 * location:
 	 */
 	min_addr = min(*output, 512UL << 20);
+	/* Make sure minimum is aligned. */
+	min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
 
 	/* Walk available memory entries to find a random address. */
 	random_addr = find_random_phys_addr(min_addr, output_size);
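
A minimal sketch of the resulting policy (the helper names here are made up
for illustration, and the constants stand in for KERNEL_IMAGE_SIZE and the
4-level-paging MAXMEM; the actual change is in the diff above):

	static unsigned long long mem_limit;

	/* Called once at startup: start from the architecture maximum. */
	static void init_mem_limit(void)
	{
		if (IS_ENABLED(CONFIG_X86_32))
			mem_limit = 512ULL << 20;	/* KERNEL_IMAGE_SIZE */
		else
			mem_limit = 64ULL << 40;	/* MAXMEM, 4-level paging */
	}

	/* "mem=" / "memmap=" handling: may only lower the limit. */
	static void limit_mem(unsigned long long mem_size)
	{
		if (mem_size && mem_size < mem_limit)
			mem_limit = mem_size;
	}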

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Remove bogus warning and unnecessary goto
  2020-07-27 23:07 ` [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 18:26   ` [PATCH v2 2/8] " Kees Cook
@ 2020-08-06 23:39   ` tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, Kees Cook, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     e2ee6173162b28053cb76b25887a0be9331c9e21
Gitweb:        https://git.kernel.org/tip/e2ee6173162b28053cb76b25887a0be9331c9e21
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:55 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:07 +02:00

x86/kaslr: Remove bogus warning and unnecessary goto

Drop the warning on seeing "--" in handle_mem_options(). The warning
triggers whenever one of the memory options is present on the command
line together with "--", but there is no problem in that case.

Replace the goto with a break.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-3-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c |  9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index e0f69f3..c31f3a5 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -295,10 +295,8 @@ static void handle_mem_options(void)
 	while (*args) {
 		args = next_arg(args, &param, &val);
 		/* Stop at -- */
-		if (!val && strcmp(param, "--") == 0) {
-			warn("Only '--' specified in cmdline");
-			goto out;
-		}
+		if (!val && strcmp(param, "--") == 0)
+			break;
 
 		if (!strcmp(param, "memmap")) {
 			mem_avoid_memmap(PARSE_MEMMAP, val);
@@ -311,7 +309,7 @@ static void handle_mem_options(void)
 				continue;
 			mem_size = memparse(p, &p);
 			if (mem_size == 0)
-				goto out;
+				break;
 
 			mem_limit = mem_size;
 		} else if (!strcmp(param, "efi_fake_mem")) {
@@ -319,7 +317,6 @@ static void handle_mem_options(void)
 		}
 	}
 
-out:
 	free(tmp_cmdline);
 	return;
 }
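
For example, a perfectly valid command line such as (hypothetical):

	mem=2G -- init=/bin/bash

previously produced the warning, because handle_mem_options() starts parsing
as soon as it sees "mem=" and then runs into the "--" separator.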

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Fix process_efi_entries comment
  2020-07-27 23:07 ` [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 18:27   ` [PATCH v2 3/8] " Kees Cook
@ 2020-08-06 23:39   ` tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, Kees Cook, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     08705365560a352d3f5b4f1f52270b4d4ff7911e
Gitweb:        https://git.kernel.org/tip/08705365560a352d3f5b4f1f52270b4d4ff7911e
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:56 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:08:12 +02:00

x86/kaslr: Fix process_efi_entries comment

Since commit:

  0982adc74673 ("x86/boot/KASLR: Work around firmware bugs by excluding EFI_BOOT_SERVICES_* and EFI_LOADER_* from KASLR's choice")

process_efi_entries() will return true if there is an EFI memmap, not
only if it contains EFI_MEMORY_MORE_RELIABLE regions.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-4-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index c31f3a5..1ab67a8 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -742,8 +742,8 @@ static bool process_mem_region(struct mem_vector *region,
 
 #ifdef CONFIG_EFI
 /*
- * Returns true if mirror region found (and must have been processed
- * for slots adding)
+ * Returns true if we processed the EFI memmap, which we prefer over the E820
+ * table if it is available.
  */
 static bool
 process_efi_entries(unsigned long minimum, unsigned long image_size)

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* [tip: x86/kaslr] x86/kaslr: Make command line handling safer
  2020-07-27 23:07 ` [PATCH v2 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
  2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
  2020-07-28 18:26   ` [PATCH v2 1/8] " Kees Cook
@ 2020-08-06 23:39   ` tip-bot2 for Arvind Sankar
  2 siblings, 0 replies; 91+ messages in thread
From: tip-bot2 for Arvind Sankar @ 2020-08-06 23:39 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: Arvind Sankar, Ingo Molnar, Kees Cook, x86, LKML

The following commit has been merged into the x86/kaslr branch of tip:

Commit-ID:     709709ac6410f4a14ded158a4b23b979e33e10fb
Gitweb:        https://git.kernel.org/tip/709709ac6410f4a14ded158a4b23b979e33e10fb
Author:        Arvind Sankar <nivedita@alum.mit.edu>
AuthorDate:    Mon, 27 Jul 2020 19:07:54 -04:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Fri, 31 Jul 2020 11:07:51 +02:00

x86/kaslr: Make command line handling safer

Handle the possibility that the command line is NULL.

Replace open-coded strlen with a function call.

Signed-off-by: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20200727230801.3468620-2-nivedita@alum.mit.edu
---
 arch/x86/boot/compressed/kaslr.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index d7408af..e0f69f3 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -268,15 +268,19 @@ static void parse_gb_huge_pages(char *param, char *val)
 static void handle_mem_options(void)
 {
 	char *args = (char *)get_cmd_line_ptr();
-	size_t len = strlen((char *)args);
+	size_t len;
 	char *tmp_cmdline;
 	char *param, *val;
 	u64 mem_size;
 
+	if (!args)
+		return;
+
 	if (!strstr(args, "memmap=") && !strstr(args, "mem=") &&
 		!strstr(args, "hugepages"))
 		return;
 
+	len = strlen(args);
 	tmp_cmdline = malloc(len + 1);
 	if (!tmp_cmdline)
 		error("Failed to allocate space for tmp_cmdline");
@@ -399,8 +403,7 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 {
 	unsigned long init_size = boot_params->hdr.init_size;
 	u64 initrd_start, initrd_size;
-	u64 cmd_line, cmd_line_size;
-	char *ptr;
+	unsigned long cmd_line, cmd_line_size;
 
 	/*
 	 * Avoid the region that is unsafe to overlap during
@@ -421,16 +424,15 @@ static void mem_avoid_init(unsigned long input, unsigned long input_size,
 	/* No need to set mapping for initrd, it will be handled in VO. */
 
 	/* Avoid kernel command line. */
-	cmd_line  = (u64)boot_params->ext_cmd_line_ptr << 32;
-	cmd_line |= boot_params->hdr.cmd_line_ptr;
+	cmd_line = get_cmd_line_ptr();
 	/* Calculate size of cmd_line. */
-	ptr = (char *)(unsigned long)cmd_line;
-	for (cmd_line_size = 0; ptr[cmd_line_size++];)
-		;
-	mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
-	mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
-	add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
-			 mem_avoid[MEM_AVOID_CMDLINE].size);
+	if (cmd_line) {
+		cmd_line_size = strlen((char *)cmd_line) + 1;
+		mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
+		mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
+		add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
+				 mem_avoid[MEM_AVOID_CMDLINE].size);
+	}
 
 	/* Avoid boot parameters. */
 	mem_avoid[MEM_AVOID_BOOTPARAMS].start = (unsigned long)boot_params;
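
For reference, a sketch of the helper now used above, reconstructed from the
open-coded logic this patch removes (the real definition lives elsewhere in
the boot code):

	static unsigned long get_cmd_line_ptr(void)
	{
		unsigned long cmd_line_ptr = boot_params->hdr.cmd_line_ptr;

		cmd_line_ptr |= (u64)boot_params->ext_cmd_line_ptr << 32;

		return cmd_line_ptr;
	}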

^ permalink raw reply related	[flat|nested] 91+ messages in thread

* Re: [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes
  2020-07-31 23:33         ` Kees Cook
  2020-08-03  1:15           ` [PATCH 0/1] x86/kaslr: Replace strlen with strnlen Arvind Sankar
@ 2020-08-14 22:47           ` Arvind Sankar
  1 sibling, 0 replies; 91+ messages in thread
From: Arvind Sankar @ 2020-08-14 22:47 UTC (permalink / raw)
  To: Kees Cook; +Cc: Ingo Molnar, Arvind Sankar, x86, linux-kernel

On Fri, Jul 31, 2020 at 04:33:35PM -0700, Kees Cook wrote:
> On Fri, Jul 31, 2020 at 11:21:46AM +0200, Ingo Molnar wrote:
> > 
> > * Arvind Sankar <nivedita@alum.mit.edu> wrote:
> > 
> > > On Tue, Jul 28, 2020 at 06:57:01PM -0400, Arvind Sankar wrote:
> > > > v2->v3:
> > > > - Fix the first patch: command line size should be strlen + 1 to account
> > > >   for terminating NUL. Avoid calling add_identity_map if cmdline was
> > > >   NULL, though it should do nothing in that case anyway.
> > > 
> > > Hi Ingo, I noticed that WIP.x86/kaslr and x86/kaslr both have the v2
> > > version of the first patch. That has a bug in the cmd_line_size
> > > calculation (missing the +1).
> > 
> > Indeed, well spotted. I rebased the affected 4 patches in x86/kaslr 
> > and used the opportunity to add Kees's Reviewed-by to the first 4 
> > patches as well.
> > 
> > I've zapped tip:x86/kaslr for now and put the whole series into 
> > tip:WIP.x86/kaslr, will move it into tip:x86/kaslr for a v5.9 merge 
> > once Kees is happy with the latest version.
> > 
> > Kees, AFAICS your type truncation and patch split-up review 
> > suggestions were resolved in v3?
> 
> I need to double-check, but I think so. I'm hoping to get to that on
> Monday. My orphan section series work took MUCH longer than I thought it
> was going to. :P
> 
> -- 
> Kees Cook

Hey Kees, did you get a chance to review?

Thanks.
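
(For reference, the off-by-one being discussed above: a command line such as
"console=ttyS0" has strlen() == 13 but occupies 14 bytes including the
terminating NUL, so the avoided region must be strlen() + 1 bytes long, as in
the rebased tip commit earlier on this page.)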

^ permalink raw reply	[flat|nested] 91+ messages in thread

end of thread, other threads:[~2020-08-14 22:47 UTC | newest]

Thread overview: 91+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-27 21:50 [PATCH 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
2020-07-27 21:50 ` [PATCH 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
2020-07-27 21:50 ` [PATCH 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
2020-07-27 21:50 ` [PATCH 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
2020-07-27 21:50 ` [PATCH 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
2020-07-27 21:50 ` [PATCH 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
2020-07-28 10:57   ` Ingo Molnar
2020-07-27 21:50 ` [PATCH 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
2020-07-28 10:58   ` Ingo Molnar
2020-07-27 21:50 ` [PATCH 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
2020-07-28 10:59   ` Ingo Molnar
2020-07-27 21:50 ` [PATCH 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
2020-07-28 11:05   ` Ingo Molnar
2020-07-27 23:07 ` [PATCH v2 0/8] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
2020-07-28 11:06   ` Ingo Molnar
2020-07-28 14:24     ` Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 00/21] " Arvind Sankar
2020-07-30 18:02     ` Arvind Sankar
2020-07-31  9:21       ` Ingo Molnar
2020-07-31 23:33         ` Kees Cook
2020-08-03  1:15           ` [PATCH 0/1] x86/kaslr: Replace strlen with strnlen Arvind Sankar
2020-08-03  1:15             ` [PATCH 1/1] " Arvind Sankar
2020-08-06 23:38               ` [tip: x86/kaslr] x86/kaslr: Replace strlen() with strnlen() tip-bot2 for Arvind Sankar
2020-08-14 22:47           ` [PATCH v3 00/21] x86/kaslr: Cleanup and small bugfixes Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 01/21] x86/kaslr: Make command line handling safer Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 02/21] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 03/21] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 04/21] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 05/21] x86/kaslr: Fix off-by-one error in __process_mem_region Arvind Sankar
2020-08-06 23:39     ` [tip: x86/kaslr] x86/kaslr: Fix off-by-one error in __process_mem_region() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 06/21] x86/kaslr: Drop redundant cur_entry from __process_mem_region Arvind Sankar
2020-08-06 23:39     ` [tip: x86/kaslr] x86/kaslr: Drop redundant cur_entry from __process_mem_region() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 07/21] x86/kaslr: Eliminate start_orig from __process_mem_region Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Eliminate 'start_orig' local variable from __process_mem_region() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 08/21] x86/kaslr: Drop redundant variable in __process_mem_region Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop redundant variable in __process_mem_region() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 09/21] x86/kaslr: Drop some redundant checks from __process_mem_region Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop some redundant checks from __process_mem_region() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 10/21] x86/kaslr: Fix off-by-one error in process_gb_huge_pages Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Fix off-by-one error in process_gb_huge_pages() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 11/21] x86/kaslr: Short-circuit gb_huge_pages on x86-32 Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 12/21] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Simplify process_gb_huge_pages() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 13/21] x86/kaslr: Drop test for command-line parameters before parsing Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 14/21] x86/kaslr: Make the type of number of slots/slot areas consistent Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 15/21] x86/kaslr: Drop redundant check in store_slot_info Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop redundant check in store_slot_info() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 16/21] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Drop unnecessary alignment in find_random_virt_addr() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 17/21] x86/kaslr: Small cleanup of find_random_phys_addr Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Small cleanup of find_random_phys_addr() tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 18/21] x86/kaslr: Make minimum/image_size unsigned long Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Make minimum/image_size 'unsigned long' tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 19/21] x86/kaslr: Replace unsigned long long with u64 Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] x86/kaslr: Replace 'unsigned long long' with 'u64' tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 20/21] x86/kaslr: Make local variables 64-bit Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 22:57   ` [PATCH v3 21/21] x86/kaslr: Add a check that the random address is in range Arvind Sankar
2020-08-06 23:38     ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-27 23:07 ` [PATCH v2 1/8] x86/kaslr: Make command line handling safer Arvind Sankar
2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 18:26   ` [PATCH v2 1/8] " Kees Cook
2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-27 23:07 ` [PATCH v2 2/8] x86/kaslr: Remove bogus warning and unnecessary goto Arvind Sankar
2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 18:26   ` [PATCH v2 2/8] " Kees Cook
2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-27 23:07 ` [PATCH v2 3/8] x86/kaslr: Fix process_efi_entries comment Arvind Sankar
2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 18:27   ` [PATCH v2 3/8] " Kees Cook
2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-27 23:07 ` [PATCH v2 4/8] x86/kaslr: Initialize mem_limit to the real maximum address Arvind Sankar
2020-07-28 12:29   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-28 18:49   ` [PATCH v2 4/8] " Kees Cook
2020-08-06 23:39   ` [tip: x86/kaslr] " tip-bot2 for Arvind Sankar
2020-07-27 23:07 ` [PATCH v2 5/8] x86/kaslr: Simplify __process_mem_region Arvind Sankar
2020-07-28 19:25   ` Kees Cook
2020-07-28 19:42     ` Arvind Sankar
2020-07-27 23:07 ` [PATCH v2 6/8] x86/kaslr: Simplify process_gb_huge_pages Arvind Sankar
2020-07-28 19:27   ` Kees Cook
2020-07-28 19:45     ` Arvind Sankar
2020-07-28 20:13       ` Kees Cook
2020-07-27 23:08 ` [PATCH v2 7/8] x86/kaslr: Clean up slot handling Arvind Sankar
2020-07-28 19:34   ` Kees Cook
2020-07-28 20:04     ` Arvind Sankar
2020-07-27 23:08 ` [PATCH v2 8/8] x86/kaslr: Don't use 64-bit mem_vector for 32-bit kernel Arvind Sankar
2020-07-28 19:37   ` Kees Cook
2020-07-28 20:08     ` Arvind Sankar
