linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage
@ 2022-12-15 12:37 Evgeniy Baskov
  2022-12-15 12:37 ` [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size Evgeniy Baskov
                   ` (26 more replies)
  0 siblings, 27 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

This patchset is aimed
* to improve UEFI compatibility of compressed kernel code for x86_64
* to setup proper memory access attributes for code and rodata sections
* to implement W^X protection policy throughout the whole execution 
  of compressed kernel for EFISTUB code path. 

Kernel is made to be more compatible with PE image specification [3],
allowing it to be successfully loaded by stricter PE loader
implementations like the one from [2]. There is at least one
known implementation that uses that loader in production [4].
There are also ongoing efforts to upstream these changes.

Also the patchset adds EFI_MEMORY_ATTTRIBUTE_PROTOCOL, included into
EFI specification since version 2.10, as a better alternative to
using DXE services for memory protection attributes manipulation,
since it is defined by the UEFI specification itself and not UEFI PI
specification. This protocol is not widely available so the code
using DXE services is kept in place as a fallback in case specific
implementation does not support the new protocol.
One of EFI implementations that already support
EFI_MEMORY_ATTTRIBUTE_PROTOCOL is Microsoft Project Mu [5].
 
Kernel image generation tool (tools/build.c) is refactored as a part
of changes that makes PE image more compatible.
   
The patchset implements memory protection for compressed kernel
code while executing both inside EFI boot services and outside of
them. For EFISTUB code path W^X protection policy is maintained
throughout the whole execution of compressed kernel. The latter
is achieved by extracting the kernel directly from EFI environment
and jumping to it's head immediately after exiting EFI boot services.
As a side effect of this change one page table rebuild and a copy of
the kernel image is removed.

Memory protection inside EFI environment is controlled by the
CONFIG_DXE_MEM_ATTRIBUTES option, although with these patches this
option also control the use EFI_MEMORY_ATTTRIBUTE_PROTOCOL and memory
protection attributes of PE sections and not only DXE services as the
name might suggest.

Changes in v2:
 * Fix spelling.
 * Rebase code to current master.
 * Split huge patches into smaller ones.
 * Remove unneeded forward declarations.
 * Make direct extraction unconditional.
   * Also make it work for x86_32.
   * Reduce lower limit of KASLR to 64M.
 * Make callback interface more logically consistent.
 * Actually declare callbacks structure before using it.
 * Mention effect on x86_32 in commit message of 
   "x86/build: Remove RWX sections and align on 4KB".
 * Clarify commit message of
   "x86/boot: Increase boot page table size".
 * Remove "startup32_" prefix on startup32_enable_nx_if_supported.
 * Move linker generated sections outside of function scope.
 * Drop some unintended changes.
 * Drop generating 2 reloc entries.
   (as I've misread the documentation and there's no need for this change.)
 * Set has_nx from enable_nx_if_supported correctly.
 * Move ELF header check to build time.
 * Set WP at the same time as PG in trampoline code,
   as it is more logically consistent.
 * Put x86-specific EFISTUB definitions in x86-stub.h header.
 * Catch presence of ELF segments violating W^X during build.
 * Move PE definitions from build.c to a new header file.
 * Fix generation of PE '.compat' section.

I decided to keep protection of compressed kernel blob and '.rodata'
separate from '.text' for now, since it does not really have a lot
of overhead.

Otherwise, all comments on v1 seems to be addressed. 

Changes in v3:
 * Setup IDT before issuing cpuid so that AMD SEV #VC handler is set.
 * Replace memcpy with strncpy to prevent out-of-bounds reads in tools/build.c.
 * Zero BSS before entering efi_main(), since it can contain garbage
   when booting via EFI handover protocol.
 * When booting via EFI don't require init_size of RAM, since in-place
   unpacking is not used anyway with that interface. This saves ~40M of memory
   for debian .config.
 * Setup sections memory protection in efi_main() to cover EFI handover protocol,
   where EFI sections are likely not properly protected.

Changes in v4:
 * Add one missing identity mapping.
 * Include following patches improving the use of DXE services:
     - efi/x86: don't try to set page attributes on 0-sized regions.
     - efi/x86: don't set unsupported memory attributes

Patch "x86/boot: Support 4KB pages for identity mapping" needs review
from x86/mm team.

I have also included Peter's patches [6-8] into the series for simplicity.

Many thanks to Ard Biesheuvel <ardb@kernel.org> and
Andrew Cooper <Andrew.Cooper3@citrix.com> for reviewing the patches, and to
Peter Jones <pjones@redhat.com>, Mario Limonciello <mario.limonciello@amd.com> and
Joey Lee <jlee@suse.com> for additional testing!

[1] https://lkml.org/lkml/2022/8/1/1314
[2] https://github.com/acidanthera/audk/tree/secure_pe
[3] https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
[4] https://www.ispras.ru/en/technologies/asperitas/
[5] https://github.com/microsoft/mu_tiano_platforms
[6] https://lkml.org/lkml/2022/10/18/1178
[7] https://lkml.org/lkml/2022/12/13/840
[8] https://lkml.org/lkml/2022/12/13/841

Evgeniy Baskov (23):
  x86/boot: Align vmlinuz sections on page size
  x86/build: Remove RWX sections and align on 4KB
  x86/boot: Set cr0 to known state in trampoline
  x86/boot: Increase boot page table size
  x86/boot: Support 4KB pages for identity mapping
  x86/boot: Setup memory protection for bzImage code
  x86/build: Check W^X of vmlinux during build
  x86/boot: Map memory explicitly
  x86/boot: Remove mapping from page fault handler
  efi/libstub: Move helper function to related file
  x86/boot: Make console interface more abstract
  x86/boot: Make kernel_add_identity_map() a pointer
  x86/boot: Split trampoline and pt init code
  x86/boot: Add EFI kernel extraction interface
  efi/x86: Support extracting kernel from libstub
  x86/boot: Reduce lower limit of physical KASLR
  x86/boot: Reduce size of the DOS stub
  tools/include: Add simplified version of pe.h
  x86/build: Cleanup tools/build.c
  x86/build: Make generated PE more spec compliant
  efi/x86: Explicitly set sections memory attributes
  efi/libstub: Add memory attribute protocol definitions
  efi/libstub: Use memory attribute protocol

Peter Jones (3):
  efi/libstub: make memory protection warnings include newlines.
  efi/x86: don't try to set page attributes on 0-sized regions.
  efi/x86: don't set unsupported memory attributes

 arch/x86/boot/Makefile                        |   2 +-
 arch/x86/boot/compressed/Makefile             |   8 +-
 arch/x86/boot/compressed/acpi.c               |  25 +-
 arch/x86/boot/compressed/efi.c                |  19 +-
 arch/x86/boot/compressed/head_32.S            |  53 +-
 arch/x86/boot/compressed/head_64.S            |  89 ++-
 arch/x86/boot/compressed/ident_map_64.c       | 122 ++--
 arch/x86/boot/compressed/kaslr.c              |   8 +-
 arch/x86/boot/compressed/misc.c               | 278 ++++-----
 arch/x86/boot/compressed/misc.h               |  23 +-
 arch/x86/boot/compressed/pgtable.h            |  20 -
 arch/x86/boot/compressed/pgtable_64.c         |  75 ++-
 arch/x86/boot/compressed/putstr.c             | 130 ++++
 arch/x86/boot/compressed/sev.c                |   6 +-
 arch/x86/boot/compressed/vmlinux.lds.S        |   6 +
 arch/x86/boot/header.S                        | 110 +---
 arch/x86/boot/tools/build.c                   | 569 +++++++++++-------
 arch/x86/include/asm/boot.h                   |  26 +-
 arch/x86/include/asm/efi.h                    |   7 +
 arch/x86/include/asm/init.h                   |   1 +
 arch/x86/include/asm/shared/extract.h         |  26 +
 arch/x86/include/asm/shared/pgtable.h         |  29 +
 arch/x86/kernel/vmlinux.lds.S                 |  15 +-
 arch/x86/mm/ident_map.c                       | 185 +++++-
 drivers/firmware/efi/Kconfig                  |   2 +
 drivers/firmware/efi/libstub/Makefile         |   2 +-
 drivers/firmware/efi/libstub/efistub.h        |  26 +
 drivers/firmware/efi/libstub/mem.c            | 194 ++++++
 .../firmware/efi/libstub/x86-extract-direct.c | 208 +++++++
 drivers/firmware/efi/libstub/x86-stub.c       | 231 ++-----
 drivers/firmware/efi/libstub/x86-stub.h       |  14 +
 include/linux/efi.h                           |   1 +
 tools/include/linux/pe.h                      | 150 +++++
 33 files changed, 1860 insertions(+), 800 deletions(-)
 delete mode 100644 arch/x86/boot/compressed/pgtable.h
 create mode 100644 arch/x86/boot/compressed/putstr.c
 create mode 100644 arch/x86/include/asm/shared/extract.h
 create mode 100644 arch/x86/include/asm/shared/pgtable.h
 create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
 create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
 create mode 100644 tools/include/linux/pe.h

-- 
2.37.4


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-10 14:43   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB Evgeniy Baskov
                   ` (25 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

To protect sections on page table level each section
needs to be aligned on page size (4KB).

Set sections alignment in linker script.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
index 112b2375d021..6be90f1a1198 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -27,21 +27,27 @@ SECTIONS
 		HEAD_TEXT
 		_ehead = . ;
 	}
+	. = ALIGN(PAGE_SIZE);
 	.rodata..compressed : {
+		_compressed = .;
 		*(.rodata..compressed)
+		_ecompressed = .;
 	}
+	. = ALIGN(PAGE_SIZE);
 	.text :	{
 		_text = .; 	/* Text */
 		*(.text)
 		*(.text.*)
 		_etext = . ;
 	}
+	. = ALIGN(PAGE_SIZE);
 	.rodata : {
 		_rodata = . ;
 		*(.rodata)	 /* read-only data */
 		*(.rodata.*)
 		_erodata = . ;
 	}
+	. = ALIGN(PAGE_SIZE);
 	.data :	{
 		_data = . ;
 		*(.data)
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
  2022-12-15 12:37 ` [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-10 14:45   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 03/26] x86/boot: Set cr0 to known state in trampoline Evgeniy Baskov
                   ` (24 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Avoid creating sections simultaneously writable and readable
to prepare for W^X implementation. Align sections on page size (4KB) to
allow protecting them in the page tables.

Split init code form ".init" segment into separate R_X ".inittext"
segment and make ".init" segment non-executable.

Also add these segments to x86_32 architecture for consistency.
Currently paging is disabled in x86_32 in compressed kernel, so
protection is not applied anyways, but .init code was incorrectly
placed in non-executable ".data" segment. This should not change
anything meaningful in memory layout now, but might be required in case
memory protection will also be implemented in compressed kernel for
x86_32.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/kernel/vmlinux.lds.S | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 2e0ee14229bf..2e56d694c491 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -102,12 +102,11 @@ jiffies = jiffies_64;
 PHDRS {
 	text PT_LOAD FLAGS(5);          /* R_E */
 	data PT_LOAD FLAGS(6);          /* RW_ */
-#ifdef CONFIG_X86_64
-#ifdef CONFIG_SMP
+#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
 	percpu PT_LOAD FLAGS(6);        /* RW_ */
 #endif
-	init PT_LOAD FLAGS(7);          /* RWE */
-#endif
+	inittext PT_LOAD FLAGS(5);      /* R_E */
+	init PT_LOAD FLAGS(6);          /* RW_ */
 	note PT_NOTE FLAGS(0);          /* ___ */
 }
 
@@ -227,9 +226,10 @@ SECTIONS
 #endif
 
 	INIT_TEXT_SECTION(PAGE_SIZE)
-#ifdef CONFIG_X86_64
-	:init
-#endif
+	:inittext
+
+	. = ALIGN(PAGE_SIZE);
+
 
 	/*
 	 * Section for code used exclusively before alternatives are run. All
@@ -241,6 +241,7 @@ SECTIONS
 	.altinstr_aux : AT(ADDR(.altinstr_aux) - LOAD_OFFSET) {
 		*(.altinstr_aux)
 	}
+	:init
 
 	INIT_DATA_SECTION(16)
 
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 03/26] x86/boot: Set cr0 to known state in trampoline
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
  2022-12-15 12:37 ` [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size Evgeniy Baskov
  2022-12-15 12:37 ` [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-10 14:48   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 04/26] x86/boot: Increase boot page table size Evgeniy Baskov
                   ` (23 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Ensure WP bit to be set to prevent boot code from writing to
non-writable memory pages.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/head_64.S | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index a75712991df3..9f2e8f50fc71 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -660,9 +660,8 @@ SYM_CODE_START(trampoline_32bit_src)
 	pushl	$__KERNEL_CS
 	pushl	%eax
 
-	/* Enable paging again. */
-	movl	%cr0, %eax
-	btsl	$X86_CR0_PG_BIT, %eax
+	/* Enable paging and set CR0 to known state (this also sets WP flag) */
+	movl	$CR0_STATE, %eax
 	movl	%eax, %cr0
 
 	lret
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 04/26] x86/boot: Increase boot page table size
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (2 preceding siblings ...)
  2022-12-15 12:37 ` [PATCH v4 03/26] x86/boot: Set cr0 to known state in trampoline Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-08  9:24   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping Evgeniy Baskov
                   ` (22 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Previous upper limit ignored pages implicitly mapped from #PF handler
by code accessing ACPI tables (boot/compressed/{acpi.c,efi.c}),
so theoretical upper limit is higher than it was set.

Using 4KB pages is desirable for better memory protection granularity.
Approximately twice as much memory is required for those.

Increase initial page table size to 64 4KB page tables.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/include/asm/boot.h | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
index 9191280d9ea3..024d972c248e 100644
--- a/arch/x86/include/asm/boot.h
+++ b/arch/x86/include/asm/boot.h
@@ -41,22 +41,24 @@
 # define BOOT_STACK_SIZE	0x4000
 
 # define BOOT_INIT_PGT_SIZE	(6*4096)
-# ifdef CONFIG_RANDOMIZE_BASE
 /*
  * Assuming all cross the 512GB boundary:
  * 1 page for level4
- * (2+2)*4 pages for kernel, param, cmd_line, and randomized kernel
- * 2 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
- * Total is 19 pages.
+ * (3+3)*2 pages for param and cmd_line
+ * (2+2+S)*2 pages for kernel and randomized kernel, where S is total number
+ *     of sections of kernel. Explanation: 2+2 are upper level page tables.
+ *     We can have only S unaligned parts of section: 1 at the end of the kernel
+ *     and (S-1) at the section borders. The start address of the kernel is
+ *     aligned, so an extra page table. There are at most S=6 sections in
+ *     vmlinux ELF image.
+ * 3 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
+ * Total is 36 pages.
+ *
+ * Some pages are also required for UEFI memory map and
+ * ACPI table mappings, so we need to add extra space.
+ * FIXME: Figure out exact amount of pages.
  */
-#  ifdef CONFIG_X86_VERBOSE_BOOTUP
-#   define BOOT_PGT_SIZE	(19*4096)
-#  else /* !CONFIG_X86_VERBOSE_BOOTUP */
-#   define BOOT_PGT_SIZE	(17*4096)
-#  endif
-# else /* !CONFIG_RANDOMIZE_BASE */
-#  define BOOT_PGT_SIZE		BOOT_INIT_PGT_SIZE
-# endif
+# define BOOT_PGT_SIZE		(64*4096)
 
 #else /* !CONFIG_X86_64 */
 # define BOOT_STACK_SIZE	0x1000
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (3 preceding siblings ...)
  2022-12-15 12:37 ` [PATCH v4 04/26] x86/boot: Increase boot page table size Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-08  9:42   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code Evgeniy Baskov
                   ` (21 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Current identity mapping code only supports 2M and 1G pages.
4KB pages are desirable for better memory protection granularity
in compressed kernel code.

Change identity mapping code to support 4KB pages and
memory remapping with different attributes.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/include/asm/init.h |   1 +
 arch/x86/mm/ident_map.c     | 185 +++++++++++++++++++++++++++++-------
 2 files changed, 154 insertions(+), 32 deletions(-)

diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index 5f1d3c421f68..a8277ee82c51 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -8,6 +8,7 @@ struct x86_mapping_info {
 	unsigned long page_flag;	 /* page flag for PMD or PUD entry */
 	unsigned long offset;		 /* ident mapping offset */
 	bool direct_gbpages;		 /* PUD level 1GB page support */
+	bool allow_4kpages;		 /* Allow more granular mappings with 4K pages */
 	unsigned long kernpg_flag;	 /* kernel pagetable flag override */
 };
 
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..662e794a325d 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -4,24 +4,127 @@
  * included by both the compressed kernel and the regular kernel.
  */
 
-static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
-			   unsigned long addr, unsigned long end)
+static void ident_pte_init(struct x86_mapping_info *info, pte_t *pte_page,
+			   unsigned long addr, unsigned long end,
+			   unsigned long flags)
 {
-	addr &= PMD_MASK;
-	for (; addr < end; addr += PMD_SIZE) {
+	addr &= PAGE_MASK;
+	for (; addr < end; addr += PAGE_SIZE) {
+		pte_t *pte = pte_page + pte_index(addr);
+
+		set_pte(pte, __pte((addr - info->offset) | flags));
+	}
+}
+
+pte_t *ident_split_large_pmd(struct x86_mapping_info *info,
+			     pmd_t *pmdp, unsigned long page_addr)
+{
+	unsigned long pmd_addr, page_flags;
+	pte_t *pte;
+
+	pte = (pte_t *)info->alloc_pgt_page(info->context);
+	if (!pte)
+		return NULL;
+
+	pmd_addr = page_addr & PMD_MASK;
+
+	/* Not a large page - clear PSE flag */
+	page_flags = pmd_flags(*pmdp) & ~_PSE;
+	ident_pte_init(info, pte, pmd_addr, pmd_addr + PMD_SIZE, page_flags);
+
+	return pte;
+}
+
+static int ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
+			  unsigned long addr, unsigned long end,
+			  unsigned long flags)
+{
+	unsigned long next;
+	bool new_table = 0;
+
+	for (; addr < end; addr = next) {
 		pmd_t *pmd = pmd_page + pmd_index(addr);
+		pte_t *pte;
 
-		if (pmd_present(*pmd))
+		next = (addr & PMD_MASK) + PMD_SIZE;
+		if (next > end)
+			next = end;
+
+		/*
+		 * Use 2M pages if 4k pages are not allowed or
+		 * we are not mapping extra, i.e. address and size are aligned.
+		 */
+
+		if (!info->allow_4kpages ||
+		    (!(addr & ~PMD_MASK) && next == addr + PMD_SIZE)) {
+
+			pmd_t pmdval;
+
+			addr &= PMD_MASK;
+			pmdval = __pmd((addr - info->offset) | flags | _PSE);
+			set_pmd(pmd, pmdval);
 			continue;
+		}
+
+		/*
+		 * If currently mapped page is large, we need to split it.
+		 * The case when we don't can remap 2M page to 2M page
+		 * with different flags is already covered above.
+		 *
+		 * If there's nothing mapped to desired address,
+		 * we need to allocate new page table.
+		 */
 
-		set_pmd(pmd, __pmd((addr - info->offset) | info->page_flag));
+		if (pmd_large(*pmd)) {
+			pte = ident_split_large_pmd(info, pmd, addr);
+			new_table = 1;
+		} else if (!pmd_present(*pmd)) {
+			pte = (pte_t *)info->alloc_pgt_page(info->context);
+			new_table = 1;
+		} else {
+			pte = pte_offset_kernel(pmd, 0);
+			new_table = 0;
+		}
+
+		if (!pte)
+			return -ENOMEM;
+
+		ident_pte_init(info, pte, addr, next, flags);
+
+		if (new_table)
+			set_pmd(pmd, __pmd(__pa(pte) | info->kernpg_flag));
 	}
+
+	return 0;
 }
 
+
+pmd_t *ident_split_large_pud(struct x86_mapping_info *info,
+			     pud_t *pudp, unsigned long page_addr)
+{
+	unsigned long pud_addr, page_flags;
+	pmd_t *pmd;
+
+	pmd = (pmd_t *)info->alloc_pgt_page(info->context);
+	if (!pmd)
+		return NULL;
+
+	pud_addr = page_addr & PUD_MASK;
+
+	/* Not a large page - clear PSE flag */
+	page_flags = pud_flags(*pudp) & ~_PSE;
+	ident_pmd_init(info, pmd, pud_addr, pud_addr + PUD_SIZE, page_flags);
+
+	return pmd;
+}
+
+
 static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 			  unsigned long addr, unsigned long end)
 {
 	unsigned long next;
+	bool new_table = 0;
+	int result;
 
 	for (; addr < end; addr = next) {
 		pud_t *pud = pud_page + pud_index(addr);
@@ -31,28 +134,39 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 		if (next > end)
 			next = end;
 
+		/* Use 1G pages only if forced, even if they are supported. */
 		if (info->direct_gbpages) {
 			pud_t pudval;
-
-			if (pud_present(*pud))
-				continue;
+			unsigned long flags;
 
 			addr &= PUD_MASK;
-			pudval = __pud((addr - info->offset) | info->page_flag);
+			flags = info->page_flag | _PSE;
+			pudval = __pud((addr - info->offset) | flags);
+
 			set_pud(pud, pudval);
 			continue;
 		}
 
-		if (pud_present(*pud)) {
+		if (pud_large(*pud)) {
+			pmd = ident_split_large_pud(info, pud, addr);
+			new_table = 1;
+		} else if (!pud_present(*pud)) {
+			pmd = (pmd_t *)info->alloc_pgt_page(info->context);
+			new_table = 1;
+		} else {
 			pmd = pmd_offset(pud, 0);
-			ident_pmd_init(info, pmd, addr, next);
-			continue;
+			new_table = 0;
 		}
-		pmd = (pmd_t *)info->alloc_pgt_page(info->context);
+
 		if (!pmd)
 			return -ENOMEM;
-		ident_pmd_init(info, pmd, addr, next);
-		set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
+
+		result = ident_pmd_init(info, pmd, addr, next, info->page_flag);
+		if (result)
+			return result;
+
+		if (new_table)
+			set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
 	}
 
 	return 0;
@@ -63,6 +177,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 {
 	unsigned long next;
 	int result;
+	bool new_table = 0;
 
 	for (; addr < end; addr = next) {
 		p4d_t *p4d = p4d_page + p4d_index(addr);
@@ -72,15 +187,14 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		if (next > end)
 			next = end;
 
-		if (p4d_present(*p4d)) {
+		if (!p4d_present(*p4d)) {
+			pud = (pud_t *)info->alloc_pgt_page(info->context);
+			new_table = 1;
+		} else {
 			pud = pud_offset(p4d, 0);
-			result = ident_pud_init(info, pud, addr, next);
-			if (result)
-				return result;
-
-			continue;
+			new_table = 0;
 		}
-		pud = (pud_t *)info->alloc_pgt_page(info->context);
+
 		if (!pud)
 			return -ENOMEM;
 
@@ -88,19 +202,22 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
 		if (result)
 			return result;
 
-		set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
+		if (new_table)
+			set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
 	}
 
 	return 0;
 }
 
-int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
-			      unsigned long pstart, unsigned long pend)
+int kernel_ident_mapping_init(struct x86_mapping_info *info,
+			      pgd_t *pgd_page, unsigned long pstart,
+			      unsigned long pend)
 {
 	unsigned long addr = pstart + info->offset;
 	unsigned long end = pend + info->offset;
 	unsigned long next;
 	int result;
+	bool new_table;
 
 	/* Set the default pagetable flags if not supplied */
 	if (!info->kernpg_flag)
@@ -117,20 +234,24 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
 		if (next > end)
 			next = end;
 
-		if (pgd_present(*pgd)) {
+		if (!pgd_present(*pgd)) {
+			p4d = (p4d_t *)info->alloc_pgt_page(info->context);
+			new_table = 1;
+		} else {
 			p4d = p4d_offset(pgd, 0);
-			result = ident_p4d_init(info, p4d, addr, next);
-			if (result)
-				return result;
-			continue;
+			new_table = 0;
 		}
 
-		p4d = (p4d_t *)info->alloc_pgt_page(info->context);
 		if (!p4d)
 			return -ENOMEM;
+
 		result = ident_p4d_init(info, p4d, addr, next);
 		if (result)
 			return result;
+
+		if (!new_table)
+			continue;
+
 		if (pgtable_l5_enabled()) {
 			set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag));
 		} else {
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (4 preceding siblings ...)
  2022-12-15 12:37 ` [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-08 10:47   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build Evgeniy Baskov
                   ` (20 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Use previously added code to use 4KB pages for mapping. Map compressed
and uncompressed kernel with appropriate memory protection attributes.
For compressed kernel set them up manually. For uncompressed kernel
used flags specified in ELF header.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

 delete mode 100644 arch/x86/boot/compressed/pgtable.h
 create mode 100644 arch/x86/include/asm/shared/pgtable.h
---
 arch/x86/boot/compressed/head_64.S      | 24 +++++-
 arch/x86/boot/compressed/ident_map_64.c | 97 ++++++++++++++++---------
 arch/x86/boot/compressed/misc.c         | 63 ++++++++++++++--
 arch/x86/boot/compressed/misc.h         | 22 +++++-
 arch/x86/boot/compressed/pgtable.h      | 20 -----
 arch/x86/boot/compressed/pgtable_64.c   |  2 +-
 arch/x86/boot/compressed/sev.c          |  6 +-
 arch/x86/include/asm/shared/pgtable.h   | 29 ++++++++
 8 files changed, 197 insertions(+), 66 deletions(-)
 delete mode 100644 arch/x86/boot/compressed/pgtable.h
 create mode 100644 arch/x86/include/asm/shared/pgtable.h

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 9f2e8f50fc71..8b9c4fe17126 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -29,13 +29,14 @@
 #include <linux/linkage.h>
 #include <asm/segment.h>
 #include <asm/boot.h>
+#include <asm/cpufeatures.h>
 #include <asm/msr.h>
 #include <asm/processor-flags.h>
 #include <asm/asm-offsets.h>
 #include <asm/bootparam.h>
 #include <asm/desc_defs.h>
 #include <asm/trapnr.h>
-#include "pgtable.h"
+#include <asm/shared/pgtable.h>
 
 /*
  * Fix alignment at 16 bytes. Following CONFIG_FUNCTION_ALIGNMENT will result
@@ -554,6 +555,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 	pushq	%rsi
 	call	load_stage2_idt
 
+	call	enable_nx_if_supported
 	/* Pass boot_params to initialize_identity_maps() */
 	movq	(%rsp), %rdi
 	call	initialize_identity_maps
@@ -578,6 +580,26 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
 	jmp	*%rax
 SYM_FUNC_END(.Lrelocated)
 
+SYM_FUNC_START_LOCAL_NOALIGN(enable_nx_if_supported)
+	pushq	%rbx
+
+	mov	$0x80000001, %eax
+	cpuid
+	btl	$(X86_FEATURE_NX & 31), %edx
+	jnc	.Lnonx
+
+	movl	$MSR_EFER, %ecx
+	rdmsr
+	btsl	$_EFER_NX, %eax
+	wrmsr
+
+	movb	$1, has_nx(%rip)
+
+.Lnonx:
+	popq	%rbx
+	RET
+SYM_FUNC_END(enable_nx_if_supported)
+
 	.code32
 /*
  * This is the 32-bit trampoline that will be copied over to low memory.
diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index d4a314cc50d6..fec795a4ce23 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -28,6 +28,7 @@
 #include <asm/trap_pf.h>
 #include <asm/trapnr.h>
 #include <asm/init.h>
+#include <asm/shared/pgtable.h>
 /* Use the static base for this part of the boot process */
 #undef __PAGE_OFFSET
 #define __PAGE_OFFSET __PAGE_OFFSET_BASE
@@ -86,24 +87,52 @@ phys_addr_t physical_mask = (1ULL << __PHYSICAL_MASK_SHIFT) - 1;
  * Due to relocation, pointers must be assigned at run time not build time.
  */
 static struct x86_mapping_info mapping_info;
+bool has_nx; /* set in head_64.S */
 
 /*
  * Adds the specified range to the identity mappings.
  */
-void kernel_add_identity_map(unsigned long start, unsigned long end)
+unsigned long kernel_add_identity_map(unsigned long start,
+				      unsigned long end,
+				      unsigned int flags)
 {
 	int ret;
 
 	/* Align boundary to 2M. */
-	start = round_down(start, PMD_SIZE);
-	end = round_up(end, PMD_SIZE);
+	start = round_down(start, PAGE_SIZE);
+	end = round_up(end, PAGE_SIZE);
 	if (start >= end)
-		return;
+		return start;
+
+	/*
+	 * Warn if W^X is violated.
+	 * Only do that if CONFIG_RANDOMIZE_BASE is set, since otherwise we need
+	 * to create RWX region in case of overlapping memory regions for
+	 * compressed and uncompressed kernel.
+	 */
+
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) &&
+	    (flags & (MAP_EXEC | MAP_WRITE)) == (MAP_EXEC | MAP_WRITE))
+		warn("W^X violation\n");
+
+	bool nx = !(flags & MAP_EXEC) && has_nx;
+	bool ro = !(flags & MAP_WRITE);
+
+	mapping_info.page_flag = sme_me_mask | (nx ?
+		(ro ? __PAGE_KERNEL_RO : __PAGE_KERNEL) :
+		(ro ? __PAGE_KERNEL_ROX : __PAGE_KERNEL_EXEC));
 
 	/* Build the mapping. */
-	ret = kernel_ident_mapping_init(&mapping_info, (pgd_t *)top_level_pgt, start, end);
+	ret = kernel_ident_mapping_init(&mapping_info,
+					(pgd_t *)top_level_pgt,
+					start, end);
 	if (ret)
 		error("Error: kernel_ident_mapping_init() failed\n");
+
+	if (!(flags & MAP_NOFLUSH))
+		write_cr3(top_level_pgt);
+
+	return start;
 }
 
 /* Locates and clears a region for a new top level page table. */
@@ -112,14 +141,17 @@ void initialize_identity_maps(void *rmode)
 	unsigned long cmdline;
 	struct setup_data *sd;
 
+	boot_params = rmode;
+
 	/* Exclude the encryption mask from __PHYSICAL_MASK */
 	physical_mask &= ~sme_me_mask;
 
 	/* Init mapping_info with run-time function/buffer pointers. */
 	mapping_info.alloc_pgt_page = alloc_pgt_page;
 	mapping_info.context = &pgt_data;
-	mapping_info.page_flag = __PAGE_KERNEL_LARGE_EXEC | sme_me_mask;
+	mapping_info.page_flag = __PAGE_KERNEL_EXEC | sme_me_mask;
 	mapping_info.kernpg_flag = _KERNPG_TABLE;
+	mapping_info.allow_4kpages = 1;
 
 	/*
 	 * It should be impossible for this not to already be true,
@@ -154,15 +186,29 @@ void initialize_identity_maps(void *rmode)
 	/*
 	 * New page-table is set up - map the kernel image, boot_params and the
 	 * command line. The uncompressed kernel requires boot_params and the
-	 * command line to be mapped in the identity mapping. Map them
-	 * explicitly here in case the compressed kernel does not touch them,
-	 * or does not touch all the pages covering them.
+	 * command line to be mapped in the identity mapping.
+	 * Every other accessed memory region is mapped later, if required.
 	 */
-	kernel_add_identity_map((unsigned long)_head, (unsigned long)_end);
-	boot_params = rmode;
-	kernel_add_identity_map((unsigned long)boot_params, (unsigned long)(boot_params + 1));
+	kernel_add_identity_map((unsigned long)_head,
+				(unsigned long)_ehead, MAP_EXEC | MAP_NOFLUSH);
+
+	kernel_add_identity_map((unsigned long)_compressed,
+				(unsigned long)_ecompressed, MAP_WRITE | MAP_NOFLUSH);
+
+	kernel_add_identity_map((unsigned long)_text,
+				(unsigned long)_etext, MAP_EXEC | MAP_NOFLUSH);
+
+	kernel_add_identity_map((unsigned long)_rodata,
+				(unsigned long)_erodata, MAP_NOFLUSH);
+
+	kernel_add_identity_map((unsigned long)_data,
+				(unsigned long)_end, MAP_WRITE | MAP_NOFLUSH);
+
+	kernel_add_identity_map((unsigned long)boot_params,
+				(unsigned long)(boot_params + 1), MAP_WRITE | MAP_NOFLUSH);
+
 	cmdline = get_cmd_line_ptr();
-	kernel_add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
+	kernel_add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE, MAP_NOFLUSH);
 
 	/*
 	 * Also map the setup_data entries passed via boot_params in case they
@@ -172,7 +218,7 @@ void initialize_identity_maps(void *rmode)
 	while (sd) {
 		unsigned long sd_addr = (unsigned long)sd;
 
-		kernel_add_identity_map(sd_addr, sd_addr + sizeof(*sd) + sd->len);
+		kernel_add_identity_map(sd_addr, sd_addr + sizeof(*sd) + sd->len, MAP_NOFLUSH);
 		sd = (struct setup_data *)sd->next;
 	}
 
@@ -185,26 +231,11 @@ void initialize_identity_maps(void *rmode)
 static pte_t *split_large_pmd(struct x86_mapping_info *info,
 			      pmd_t *pmdp, unsigned long __address)
 {
-	unsigned long page_flags;
-	unsigned long address;
-	pte_t *pte;
-	pmd_t pmd;
-	int i;
-
-	pte = (pte_t *)info->alloc_pgt_page(info->context);
+	unsigned long address = __address & PMD_MASK;
+	pte_t *pte = ident_split_large_pmd(info, pmdp, address);
 	if (!pte)
 		return NULL;
 
-	address     = __address & PMD_MASK;
-	/* No large page - clear PSE flag */
-	page_flags  = info->page_flag & ~_PAGE_PSE;
-
-	/* Populate the PTEs */
-	for (i = 0; i < PTRS_PER_PMD; i++) {
-		set_pte(&pte[i], __pte(address | page_flags));
-		address += PAGE_SIZE;
-	}
-
 	/*
 	 * Ideally we need to clear the large PMD first and do a TLB
 	 * flush before we write the new PMD. But the 2M range of the
@@ -214,7 +245,7 @@ static pte_t *split_large_pmd(struct x86_mapping_info *info,
 	 * also the only user of the page-table, so there is no chance
 	 * of a TLB multihit.
 	 */
-	pmd = __pmd((unsigned long)pte | info->kernpg_flag);
+	pmd_t pmd = __pmd((unsigned long)pte | info->kernpg_flag);
 	set_pmd(pmdp, pmd);
 	/* Flush TLB to establish the new PMD */
 	write_cr3(top_level_pgt);
@@ -377,5 +408,5 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
 	 * Error code is sane - now identity map the 2M region around
 	 * the faulting address.
 	 */
-	kernel_add_identity_map(address, end);
+	kernel_add_identity_map(address, end, MAP_WRITE);
 }
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index cf690d8712f4..0c7ec290044d 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -14,10 +14,10 @@
 
 #include "misc.h"
 #include "error.h"
-#include "pgtable.h"
 #include "../string.h"
 #include "../voffset.h"
 #include <asm/bootparam_utils.h>
+#include <asm/shared/pgtable.h>
 
 /*
  * WARNING!!
@@ -277,7 +277,8 @@ static inline void handle_relocations(void *output, unsigned long output_len,
 { }
 #endif
 
-static void parse_elf(void *output)
+static void parse_elf(void *output, unsigned long output_len,
+		      unsigned long virt_addr)
 {
 #ifdef CONFIG_X86_64
 	Elf64_Ehdr ehdr;
@@ -287,6 +288,7 @@ static void parse_elf(void *output)
 	Elf32_Phdr *phdrs, *phdr;
 #endif
 	void *dest;
+	unsigned long addr;
 	int i;
 
 	memcpy(&ehdr, output, sizeof(ehdr));
@@ -323,10 +325,49 @@ static void parse_elf(void *output)
 #endif
 			memmove(dest, output + phdr->p_offset, phdr->p_filesz);
 			break;
-		default: /* Ignore other PT_* */ break;
+		default:
+			/* Ignore other PT_* */
+			break;
+		}
+	}
+
+	handle_relocations(output, output_len, virt_addr);
+
+	if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+		goto skip_protect;
+
+	for (i = 0; i < ehdr.e_phnum; i++) {
+		phdr = &phdrs[i];
+
+		switch (phdr->p_type) {
+		case PT_LOAD:
+#ifdef CONFIG_RELOCATABLE
+			addr = (unsigned long)output;
+			addr += (phdr->p_paddr - LOAD_PHYSICAL_ADDR);
+#else
+			addr = phdr->p_paddr;
+#endif
+			/*
+			 * Simultaneously readable and writable segments are
+			 * violating W^X, and should not be present in vmlinux image.
+			 * The absence of such segments is checked during build.
+			 */
+
+			unsigned int flags = MAP_PROTECT;
+			if (phdr->p_flags & PF_X)
+				flags |= MAP_EXEC;
+			if (phdr->p_flags & PF_W)
+				flags |= MAP_WRITE;
+
+			kernel_add_identity_map(addr, addr + phdr->p_memsz, flags);
+			break;
+		default:
+			/* Ignore other PT_* */
+			break;
 		}
 	}
 
+skip_protect:
 	free(phdrs);
 }
 
@@ -434,6 +475,19 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 				needed_size,
 				&virt_addr);
 
+	unsigned long phys_addr = (unsigned long)output;
+
+	/*
+	 * If KASLR is disabled input and output regions may overlap.
+	 * In this case we need to map region excutable as well.
+	 */
+	unsigned long map_flags = MAP_ALLOC | MAP_WRITE |
+			(IS_ENABLED(CONFIG_RANDOMIZE_BASE) ? 0 : MAP_EXEC);
+	phys_addr = kernel_add_identity_map(phys_addr,
+					    phys_addr + needed_size,
+					    map_flags);
+	output = (unsigned char *)phys_addr;
+
 	/* Validate memory location choices. */
 	if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
 		error("Destination physical address inappropriately aligned");
@@ -456,8 +510,7 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	debug_putstr("\nDecompressing Linux... ");
 	__decompress(input_data, input_len, NULL, NULL, output, output_len,
 			NULL, error);
-	parse_elf(output);
-	handle_relocations(output, output_len, virt_addr);
+	parse_elf(output, output_len, virt_addr);
 	debug_putstr("done.\nBooting the kernel.\n");
 
 	/* Disable exception handling before booting the kernel */
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 62208ec04ca4..033db9b536e6 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -44,8 +44,12 @@
 #define memptr unsigned
 #endif
 
-/* boot/compressed/vmlinux start and end markers */
-extern char _head[], _end[];
+/* Compressed kernel section start/end markers. */
+extern char _head[], _ehead[];
+extern char _compressed[], _ecompressed[];
+extern char _text[], _etext[];
+extern char _rodata[], _erodata[];
+extern char _data[], _end[];
 
 /* misc.c */
 extern memptr free_mem_ptr;
@@ -171,8 +175,18 @@ static inline int count_immovable_mem_regions(void) { return 0; }
 #ifdef CONFIG_X86_5LEVEL
 extern unsigned int __pgtable_l5_enabled, pgdir_shift, ptrs_per_p4d;
 #endif
-extern void kernel_add_identity_map(unsigned long start, unsigned long end);
-
+#ifdef CONFIG_X86_64
+extern unsigned long kernel_add_identity_map(unsigned long start,
+					     unsigned long end,
+					     unsigned int flags);
+#else
+static inline unsigned long kernel_add_identity_map(unsigned long start,
+						    unsigned long end,
+						    unsigned int flags)
+{
+	return start;
+}
+#endif
 /* Used by PAGE_KERN* macros: */
 extern pteval_t __default_kernel_pte_mask;
 
diff --git a/arch/x86/boot/compressed/pgtable.h b/arch/x86/boot/compressed/pgtable.h
deleted file mode 100644
index cc9b2529a086..000000000000
--- a/arch/x86/boot/compressed/pgtable.h
+++ /dev/null
@@ -1,20 +0,0 @@
-#ifndef BOOT_COMPRESSED_PAGETABLE_H
-#define BOOT_COMPRESSED_PAGETABLE_H
-
-#define TRAMPOLINE_32BIT_SIZE		(2 * PAGE_SIZE)
-
-#define TRAMPOLINE_32BIT_PGTABLE_OFFSET	0
-
-#define TRAMPOLINE_32BIT_CODE_OFFSET	PAGE_SIZE
-#define TRAMPOLINE_32BIT_CODE_SIZE	0x80
-
-#define TRAMPOLINE_32BIT_STACK_END	TRAMPOLINE_32BIT_SIZE
-
-#ifndef __ASSEMBLER__
-
-extern unsigned long *trampoline_32bit;
-
-extern void trampoline_32bit_src(void *return_ptr);
-
-#endif /* __ASSEMBLER__ */
-#endif /* BOOT_COMPRESSED_PAGETABLE_H */
diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index 2ac12ff4111b..c7cf5a1059a8 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -2,7 +2,7 @@
 #include "misc.h"
 #include <asm/e820/types.h>
 #include <asm/processor.h>
-#include "pgtable.h"
+#include <asm/shared/pgtable.h>
 #include "../string.h"
 #include "efi.h"
 
diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
index c93930d5ccbd..99f3ad0b30f3 100644
--- a/arch/x86/boot/compressed/sev.c
+++ b/arch/x86/boot/compressed/sev.c
@@ -13,6 +13,7 @@
 #include "misc.h"
 
 #include <asm/pgtable_types.h>
+#include <asm/shared/pgtable.h>
 #include <asm/sev.h>
 #include <asm/trapnr.h>
 #include <asm/trap_pf.h>
@@ -435,10 +436,11 @@ void sev_prep_identity_maps(unsigned long top_level_pgt)
 		unsigned long cc_info_pa = boot_params->cc_blob_address;
 		struct cc_blob_sev_info *cc_info;
 
-		kernel_add_identity_map(cc_info_pa, cc_info_pa + sizeof(*cc_info));
+		kernel_add_identity_map(cc_info_pa, cc_info_pa + sizeof(*cc_info), MAP_NOFLUSH);
 
 		cc_info = (struct cc_blob_sev_info *)cc_info_pa;
-		kernel_add_identity_map(cc_info->cpuid_phys, cc_info->cpuid_phys + cc_info->cpuid_len);
+		kernel_add_identity_map(cc_info->cpuid_phys,
+					cc_info->cpuid_phys + cc_info->cpuid_len, MAP_NOFLUSH);
 	}
 
 	sev_verify_cbit(top_level_pgt);
diff --git a/arch/x86/include/asm/shared/pgtable.h b/arch/x86/include/asm/shared/pgtable.h
new file mode 100644
index 000000000000..6527dadf39d6
--- /dev/null
+++ b/arch/x86/include/asm/shared/pgtable.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ASM_SHARED_PAGETABLE_H
+#define ASM_SHARED_PAGETABLE_H
+
+#define MAP_WRITE	0x02 /* Writable memory */
+#define MAP_EXEC	0x04 /* Executable memory */
+#define MAP_ALLOC	0x10 /* Range needs to be allocated */
+#define MAP_PROTECT	0x20 /* Set exact memory attributes for memory range */
+#define MAP_NOFLUSH	0x40 /* Avoid flushing TLB */
+
+#define TRAMPOLINE_32BIT_SIZE		(3 * PAGE_SIZE)
+
+#define TRAMPOLINE_32BIT_PLACEMENT_MAX	(0xA0000)
+
+#define TRAMPOLINE_32BIT_PGTABLE_OFFSET	0
+
+#define TRAMPOLINE_32BIT_CODE_OFFSET	PAGE_SIZE
+#define TRAMPOLINE_32BIT_CODE_SIZE	0x80
+
+#define TRAMPOLINE_32BIT_STACK_END	TRAMPOLINE_32BIT_SIZE
+
+#ifndef __ASSEMBLER__
+
+extern unsigned long *trampoline_32bit;
+
+extern void trampoline_32bit_src(void *return_ptr);
+
+#endif /* __ASSEMBLER__ */
+#endif /* ASM_SHARED_PAGETABLE_H */
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (5 preceding siblings ...)
  2022-12-15 12:37 ` [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-08  9:34   ` Ard Biesheuvel
  2022-12-15 12:37 ` [PATCH v4 08/26] x86/boot: Map memory explicitly Evgeniy Baskov
                   ` (19 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Check if there are simultaneously writable and executable
program segments in vmlinux ELF image and fail build if there are any.

This would prevent accidental introduction of RWX segments.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/Makefile | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 1acff356d97a..4dcab38f5a38 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -112,11 +112,17 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
+quiet_cmd_wx_check = WXCHK   $<
+cmd_wx_check = if $(OBJDUMP) -p $< | grep "flags .wx" > /dev/null; \
+	       then (echo >&2 "$<: Simultaneously writable and executable sections are prohibited"; \
+		     /bin/false); fi
+
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 	$(call if_changed,ld)
 
 OBJCOPYFLAGS_vmlinux.bin :=  -R .comment -S
 $(obj)/vmlinux.bin: vmlinux FORCE
+	$(call cmd,wx_check)
 	$(call if_changed,objcopy)
 
 targets += $(patsubst $(obj)/%,%,$(vmlinux-objs-y)) vmlinux.bin.all vmlinux.relocs
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 08/26] x86/boot: Map memory explicitly
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (6 preceding siblings ...)
  2022-12-15 12:37 ` [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build Evgeniy Baskov
@ 2022-12-15 12:37 ` Evgeniy Baskov
  2023-03-08  9:38   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 09/26] x86/boot: Remove mapping from page fault handler Evgeniy Baskov
                   ` (18 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Implicit mappings hide possible memory errors, e.g. allocations for
ACPI tables were not included in boot page table size.

Replace all implicit mappings from page fault handler with
explicit mappings.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/acpi.c  | 25 ++++++++++++++++++++++++-
 arch/x86/boot/compressed/efi.c   | 19 ++++++++++++++++++-
 arch/x86/boot/compressed/kaslr.c |  4 ++++
 3 files changed, 46 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/acpi.c b/arch/x86/boot/compressed/acpi.c
index 9caf89063e77..c775e01fc7db 100644
--- a/arch/x86/boot/compressed/acpi.c
+++ b/arch/x86/boot/compressed/acpi.c
@@ -93,6 +93,8 @@ static u8 *scan_mem_for_rsdp(u8 *start, u32 length)
 
 	end = start + length;
 
+	kernel_add_identity_map((unsigned long)start, (unsigned long)end, 0);
+
 	/* Search from given start address for the requested length */
 	for (address = start; address < end; address += ACPI_RSDP_SCAN_STEP) {
 		/*
@@ -128,6 +130,9 @@ static acpi_physical_address bios_get_rsdp_addr(void)
 	unsigned long address;
 	u8 *rsdp;
 
+	kernel_add_identity_map((unsigned long)ACPI_EBDA_PTR_LOCATION,
+				(unsigned long)ACPI_EBDA_PTR_LOCATION + 2, 0);
+
 	/* Get the location of the Extended BIOS Data Area (EBDA) */
 	address = *(u16 *)ACPI_EBDA_PTR_LOCATION;
 	address <<= 4;
@@ -215,6 +220,9 @@ static unsigned long get_acpi_srat_table(void)
 	if (!rsdp)
 		return 0;
 
+	kernel_add_identity_map((unsigned long)rsdp,
+				(unsigned long)(rsdp + 1), 0);
+
 	/* Get ACPI root table from RSDP.*/
 	if (!(cmdline_find_option("acpi", arg, sizeof(arg)) == 4 &&
 	    !strncmp(arg, "rsdt", 4)) &&
@@ -231,10 +239,17 @@ static unsigned long get_acpi_srat_table(void)
 		return 0;
 
 	header = (struct acpi_table_header *)root_table;
+
+	kernel_add_identity_map((unsigned long)header,
+				(unsigned long)(header + 1), 0);
+
 	len = header->length;
 	if (len < sizeof(struct acpi_table_header) + size)
 		return 0;
 
+	kernel_add_identity_map((unsigned long)header,
+				(unsigned long)header + len, 0);
+
 	num_entries = (len - sizeof(struct acpi_table_header)) / size;
 	entry = (u8 *)(root_table + sizeof(struct acpi_table_header));
 
@@ -247,8 +262,16 @@ static unsigned long get_acpi_srat_table(void)
 		if (acpi_table) {
 			header = (struct acpi_table_header *)acpi_table;
 
-			if (ACPI_COMPARE_NAMESEG(header->signature, ACPI_SIG_SRAT))
+			kernel_add_identity_map(acpi_table,
+						acpi_table + sizeof(*header),
+						0);
+
+			if (ACPI_COMPARE_NAMESEG(header->signature, ACPI_SIG_SRAT)) {
+				kernel_add_identity_map(acpi_table,
+							acpi_table + header->length,
+							0);
 				return acpi_table;
+			}
 		}
 		entry += size;
 	}
diff --git a/arch/x86/boot/compressed/efi.c b/arch/x86/boot/compressed/efi.c
index 6edd034b0b30..ce70103fbbc0 100644
--- a/arch/x86/boot/compressed/efi.c
+++ b/arch/x86/boot/compressed/efi.c
@@ -57,10 +57,14 @@ enum efi_type efi_get_type(struct boot_params *bp)
  */
 unsigned long efi_get_system_table(struct boot_params *bp)
 {
-	unsigned long sys_tbl_pa;
+	static unsigned long sys_tbl_pa __section(".data");
 	struct efi_info *ei;
+	unsigned long sys_tbl_size;
 	enum efi_type et;
 
+	if (sys_tbl_pa)
+		return sys_tbl_pa;
+
 	/* Get systab from boot params. */
 	ei = &bp->efi_info;
 #ifdef CONFIG_X86_64
@@ -73,6 +77,13 @@ unsigned long efi_get_system_table(struct boot_params *bp)
 		return 0;
 	}
 
+	if (efi_get_type(bp) == EFI_TYPE_64)
+		sys_tbl_size = sizeof(efi_system_table_64_t);
+	else
+		sys_tbl_size = sizeof(efi_system_table_32_t);
+
+	kernel_add_identity_map(sys_tbl_pa, sys_tbl_pa + sys_tbl_size, 0);
+
 	return sys_tbl_pa;
 }
 
@@ -92,6 +103,10 @@ static struct efi_setup_data *get_kexec_setup_data(struct boot_params *bp,
 
 	pa_data = bp->hdr.setup_data;
 	while (pa_data) {
+		unsigned long pa_data_end = pa_data + sizeof(struct setup_data)
+					  + sizeof(struct efi_setup_data);
+		kernel_add_identity_map(pa_data, pa_data_end, 0);
+
 		data = (struct setup_data *)pa_data;
 		if (data->type == SETUP_EFI) {
 			esd = (struct efi_setup_data *)(pa_data + sizeof(struct setup_data));
@@ -160,6 +175,8 @@ int efi_get_conf_table(struct boot_params *bp, unsigned long *cfg_tbl_pa,
 		return -EINVAL;
 	}
 
+	kernel_add_identity_map(*cfg_tbl_pa, *cfg_tbl_pa + *cfg_tbl_len, 0);
+
 	return 0;
 }
 
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index 454757fbdfe5..c0ee116c4fa2 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -688,6 +688,8 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
 	u32 nr_desc;
 	int i;
 
+	kernel_add_identity_map((unsigned long)e, (unsigned long)(e + 1), 0);
+
 	signature = (char *)&e->efi_loader_signature;
 	if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
 	    strncmp(signature, EFI64_LOADER_SIGNATURE, 4))
@@ -704,6 +706,8 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
 	pmap = (e->efi_memmap | ((__u64)e->efi_memmap_hi << 32));
 #endif
 
+	kernel_add_identity_map(pmap, pmap + e->efi_memmap_size, 0);
+
 	nr_desc = e->efi_memmap_size / e->efi_memdesc_size;
 	for (i = 0; i < nr_desc; i++) {
 		md = efi_early_memdesc_ptr(pmap, e->efi_memdesc_size, i);
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 09/26] x86/boot: Remove mapping from page fault handler
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (7 preceding siblings ...)
  2022-12-15 12:37 ` [PATCH v4 08/26] x86/boot: Map memory explicitly Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 14:49   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 10/26] efi/libstub: Move helper function to related file Evgeniy Baskov
                   ` (17 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

After every implicit mapping is removed, this code is no longer needed.

Remove memory mapping from page fault handler to ensure that there are
no hidden invalid memory accesses.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/ident_map_64.c | 26 ++++++++++---------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index fec795a4ce23..ba5108c58a4e 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -386,27 +386,21 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
 {
 	unsigned long address = native_read_cr2();
 	unsigned long end;
-	bool ghcb_fault;
+	char *msg;
 
-	ghcb_fault = sev_es_check_ghcb_fault(address);
+	if (sev_es_check_ghcb_fault(address))
+		msg = "Page-fault on GHCB page:";
+	else
+		msg = "Unexpected page-fault:";
 
 	address   &= PMD_MASK;
 	end        = address + PMD_SIZE;
 
 	/*
-	 * Check for unexpected error codes. Unexpected are:
-	 *	- Faults on present pages
-	 *	- User faults
-	 *	- Reserved bits set
-	 */
-	if (error_code & (X86_PF_PROT | X86_PF_USER | X86_PF_RSVD))
-		do_pf_error("Unexpected page-fault:", error_code, address, regs->ip);
-	else if (ghcb_fault)
-		do_pf_error("Page-fault on GHCB page:", error_code, address, regs->ip);
-
-	/*
-	 * Error code is sane - now identity map the 2M region around
-	 * the faulting address.
+	 * Since all memory allocations are made explicit
+	 * now, every page fault at this stage is an
+	 * error and the error handler is there only
+	 * for debug purposes.
 	 */
-	kernel_add_identity_map(address, end, MAP_WRITE);
+	do_pf_error(msg, error_code, address, regs->ip);
 }
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 10/26] efi/libstub: Move helper function to related file
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (8 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 09/26] x86/boot: Remove mapping from page fault handler Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 11/26] x86/boot: Make console interface more abstract Evgeniy Baskov
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

efi_adjust_memory_range_protection() can be useful outside x86-stub.c.

Move it to mem.c, where memory related code resides and make it
non-static.

Change its behavior to setup exact attributes and disallow making
memory regions readable and writable simultaneously for supported
configurations.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 drivers/firmware/efi/libstub/efistub.h  |   4 +
 drivers/firmware/efi/libstub/mem.c      | 102 ++++++++++++++++++++++++
 drivers/firmware/efi/libstub/x86-stub.c |  66 ++-------------
 3 files changed, 112 insertions(+), 60 deletions(-)

diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index 5b8f2c411ed8..c55325f829e7 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -975,6 +975,10 @@ efi_status_t efi_relocate_kernel(unsigned long *image_addr,
 				 unsigned long alignment,
 				 unsigned long min_addr);
 
+efi_status_t efi_adjust_memory_range_protection(unsigned long start,
+						unsigned long size,
+						unsigned long attributes);
+
 efi_status_t efi_parse_options(char const *cmdline);
 
 void efi_parse_option_graphics(char *option);
diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
index 4f1fa302234d..3e47e5931f04 100644
--- a/drivers/firmware/efi/libstub/mem.c
+++ b/drivers/firmware/efi/libstub/mem.c
@@ -128,3 +128,105 @@ void efi_free(unsigned long size, unsigned long addr)
 	nr_pages = round_up(size, EFI_ALLOC_ALIGN) / EFI_PAGE_SIZE;
 	efi_bs_call(free_pages, addr, nr_pages);
 }
+
+/**
+ * efi_adjust_memory_range_protection() - change memory range protection attributes
+ * @start:	memory range start address
+ * @size:	memory range size
+ *
+ * Actual memory range for which memory attributes are modified is
+ * the smallest ranged with start address and size aligned to EFI_PAGE_SIZE
+ * that includes [start, start + size].
+ *
+ * @return: status code
+ */
+efi_status_t efi_adjust_memory_range_protection(unsigned long start,
+						unsigned long size,
+						unsigned long attributes)
+{
+	efi_status_t status;
+	efi_gcd_memory_space_desc_t desc;
+	efi_physical_addr_t end, next;
+	efi_physical_addr_t rounded_start, rounded_end;
+	efi_physical_addr_t unprotect_start, unprotect_size;
+
+	if (efi_dxe_table == NULL)
+		return EFI_UNSUPPORTED;
+
+	/*
+	 * This function should not be used to modify attributes
+	 * other than writable/executable.
+	 */
+
+	if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
+		return EFI_INVALID_PARAMETER;
+
+	/*
+	 * Disallow simultaniously executable and writable memory
+	 * to inforce W^X policy if direct extraction code is enabled.
+	 */
+
+	if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
+		efi_warn("W^X violation at [%08lx,%08lx]\n",
+			 (unsigned long)rounded_start,
+			 (unsigned long)rounded_end);
+	}
+
+	rounded_start = rounddown(start, EFI_PAGE_SIZE);
+	rounded_end = roundup(start + size, EFI_PAGE_SIZE);
+
+	/*
+	 * Don't modify memory region attributes, they are
+	 * already suitable, to lower the possibility to
+	 * encounter firmware bugs.
+	 */
+
+	for (end = start + size; start < end; start = next) {
+
+		status = efi_dxe_call(get_memory_space_descriptor,
+				      start, &desc);
+
+		if (status != EFI_SUCCESS) {
+			efi_warn("Unable to get memory descriptor at %lx\n",
+				 start);
+			return status;
+		}
+
+		next = desc.base_address + desc.length;
+
+		/*
+		 * Only system memory is suitable for trampoline/kernel image
+		 * placement, so only this type of memory needs its attributes
+		 * to be modified.
+		 */
+
+		if (desc.gcd_memory_type != EfiGcdMemoryTypeSystemMemory) {
+			efi_warn("Attempted to change protection of special memory range\n");
+			return EFI_UNSUPPORTED;
+		}
+
+		if (((desc.attributes ^ attributes) &
+		     (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0)
+			continue;
+
+		desc.attributes &= ~(EFI_MEMORY_RO | EFI_MEMORY_XP);
+		desc.attributes |= attributes;
+
+		unprotect_start = max(rounded_start, desc.base_address);
+		unprotect_size = min(rounded_end, next) - unprotect_start;
+
+		status = efi_dxe_call(set_memory_space_attributes,
+				      unprotect_start, unprotect_size,
+				      desc.attributes);
+
+		if (status != EFI_SUCCESS) {
+			efi_warn("Unable to unprotect memory range [%08lx,%08lx]: %lx\n",
+				 (unsigned long)unprotect_start,
+				 (unsigned long)(unprotect_start + unprotect_size),
+				 status);
+			return status;
+		}
+	}
+
+	return EFI_SUCCESS;
+}
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index a0bfd31358ba..7fb1eff88a18 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -212,61 +212,6 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params)
 	}
 }
 
-static void
-adjust_memory_range_protection(unsigned long start, unsigned long size)
-{
-	efi_status_t status;
-	efi_gcd_memory_space_desc_t desc;
-	unsigned long end, next;
-	unsigned long rounded_start, rounded_end;
-	unsigned long unprotect_start, unprotect_size;
-
-	if (efi_dxe_table == NULL)
-		return;
-
-	rounded_start = rounddown(start, EFI_PAGE_SIZE);
-	rounded_end = roundup(start + size, EFI_PAGE_SIZE);
-
-	/*
-	 * Don't modify memory region attributes, they are
-	 * already suitable, to lower the possibility to
-	 * encounter firmware bugs.
-	 */
-
-	for (end = start + size; start < end; start = next) {
-
-		status = efi_dxe_call(get_memory_space_descriptor, start, &desc);
-
-		if (status != EFI_SUCCESS)
-			return;
-
-		next = desc.base_address + desc.length;
-
-		/*
-		 * Only system memory is suitable for trampoline/kernel image placement,
-		 * so only this type of memory needs its attributes to be modified.
-		 */
-
-		if (desc.gcd_memory_type != EfiGcdMemoryTypeSystemMemory ||
-		    (desc.attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0)
-			continue;
-
-		unprotect_start = max(rounded_start, (unsigned long)desc.base_address);
-		unprotect_size = min(rounded_end, next) - unprotect_start;
-
-		status = efi_dxe_call(set_memory_space_attributes,
-				      unprotect_start, unprotect_size,
-				      EFI_MEMORY_WB);
-
-		if (status != EFI_SUCCESS) {
-			efi_warn("Unable to unprotect memory range [%08lx,%08lx]: %lx\n",
-				 unprotect_start,
-				 unprotect_start + unprotect_size,
-				 status);
-		}
-	}
-}
-
 /*
  * Trampoline takes 2 pages and can be loaded in first megabyte of memory
  * with its end placed between 128k and 640k where BIOS might start.
@@ -290,12 +235,12 @@ setup_memory_protection(unsigned long image_base, unsigned long image_size)
 	 * and relocated kernel image.
 	 */
 
-	adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
-				       TRAMPOLINE_PLACEMENT_SIZE);
+	efi_adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
+					   TRAMPOLINE_PLACEMENT_SIZE, 0);
 
 #ifdef CONFIG_64BIT
 	if (image_base != (unsigned long)startup_32)
-		adjust_memory_range_protection(image_base, image_size);
+		efi_adjust_memory_range_protection(image_base, image_size, 0);
 #else
 	/*
 	 * Clear protection flags on a whole range of possible
@@ -305,8 +250,9 @@ setup_memory_protection(unsigned long image_base, unsigned long image_size)
 	 * need to remove possible protection on relocated image
 	 * itself disregarding further relocations.
 	 */
-	adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
-				       KERNEL_IMAGE_SIZE - LOAD_PHYSICAL_ADDR);
+	efi_adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
+					   KERNEL_IMAGE_SIZE - LOAD_PHYSICAL_ADDR,
+					   0);
 #endif
 }
 
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 11/26] x86/boot: Make console interface more abstract
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (9 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 10/26] efi/libstub: Move helper function to related file Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer Evgeniy Baskov
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

To be able to extract kernel from EFI, console output functions
need to be replaceable by alternative implementations.

Make all of those functions pointers.
Move serial console code to separate file.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/Makefile |   2 +-
 arch/x86/boot/compressed/misc.c   | 109 +------------------------
 arch/x86/boot/compressed/misc.h   |   9 ++-
 arch/x86/boot/compressed/putstr.c | 130 ++++++++++++++++++++++++++++++
 4 files changed, 139 insertions(+), 111 deletions(-)
 create mode 100644 arch/x86/boot/compressed/putstr.c

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 4dcab38f5a38..4b1524446875 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -93,7 +93,7 @@ $(obj)/misc.o: $(obj)/../voffset.h
 
 vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/kernel_info.o $(obj)/head_$(BITS).o \
 	$(obj)/misc.o $(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
-	$(obj)/piggy.o $(obj)/cpuflags.o
+	$(obj)/piggy.o $(obj)/cpuflags.o $(obj)/putstr.o
 
 vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 0c7ec290044d..aa4a22bc9cf9 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -53,13 +53,6 @@ struct port_io_ops pio_ops;
 memptr free_mem_ptr;
 memptr free_mem_end_ptr;
 
-static char *vidmem;
-static int vidport;
-
-/* These might be accessed before .bss is cleared, so use .data instead. */
-static int lines __section(".data");
-static int cols __section(".data");
-
 #ifdef CONFIG_KERNEL_GZIP
 #include "../../../../lib/decompress_inflate.c"
 #endif
@@ -92,95 +85,6 @@ static int cols __section(".data");
  * ../header.S.
  */
 
-static void scroll(void)
-{
-	int i;
-
-	memmove(vidmem, vidmem + cols * 2, (lines - 1) * cols * 2);
-	for (i = (lines - 1) * cols * 2; i < lines * cols * 2; i += 2)
-		vidmem[i] = ' ';
-}
-
-#define XMTRDY          0x20
-
-#define TXR             0       /*  Transmit register (WRITE) */
-#define LSR             5       /*  Line Status               */
-static void serial_putchar(int ch)
-{
-	unsigned timeout = 0xffff;
-
-	while ((inb(early_serial_base + LSR) & XMTRDY) == 0 && --timeout)
-		cpu_relax();
-
-	outb(ch, early_serial_base + TXR);
-}
-
-void __putstr(const char *s)
-{
-	int x, y, pos;
-	char c;
-
-	if (early_serial_base) {
-		const char *str = s;
-		while (*str) {
-			if (*str == '\n')
-				serial_putchar('\r');
-			serial_putchar(*str++);
-		}
-	}
-
-	if (lines == 0 || cols == 0)
-		return;
-
-	x = boot_params->screen_info.orig_x;
-	y = boot_params->screen_info.orig_y;
-
-	while ((c = *s++) != '\0') {
-		if (c == '\n') {
-			x = 0;
-			if (++y >= lines) {
-				scroll();
-				y--;
-			}
-		} else {
-			vidmem[(x + cols * y) * 2] = c;
-			if (++x >= cols) {
-				x = 0;
-				if (++y >= lines) {
-					scroll();
-					y--;
-				}
-			}
-		}
-	}
-
-	boot_params->screen_info.orig_x = x;
-	boot_params->screen_info.orig_y = y;
-
-	pos = (x + cols * y) * 2;	/* Update cursor position */
-	outb(14, vidport);
-	outb(0xff & (pos >> 9), vidport+1);
-	outb(15, vidport);
-	outb(0xff & (pos >> 1), vidport+1);
-}
-
-void __puthex(unsigned long value)
-{
-	char alpha[2] = "0";
-	int bits;
-
-	for (bits = sizeof(value) * 8 - 4; bits >= 0; bits -= 4) {
-		unsigned long digit = (value >> bits) & 0xf;
-
-		if (digit < 0xA)
-			alpha[0] = '0' + digit;
-		else
-			alpha[0] = 'a' + (digit - 0xA);
-
-		__putstr(alpha);
-	}
-}
-
 #ifdef CONFIG_X86_NEED_RELOCS
 static void handle_relocations(void *output, unsigned long output_len,
 			       unsigned long virt_addr)
@@ -406,17 +310,6 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 
 	sanitize_boot_params(boot_params);
 
-	if (boot_params->screen_info.orig_video_mode == 7) {
-		vidmem = (char *) 0xb0000;
-		vidport = 0x3b4;
-	} else {
-		vidmem = (char *) 0xb8000;
-		vidport = 0x3d4;
-	}
-
-	lines = boot_params->screen_info.orig_video_lines;
-	cols = boot_params->screen_info.orig_video_cols;
-
 	init_default_io_ops();
 
 	/*
@@ -427,7 +320,7 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	 */
 	early_tdx_detect();
 
-	console_init();
+	init_bare_console();
 
 	/*
 	 * Save RSDP address for later use. Have this after console_init()
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 033db9b536e6..38d31bec062d 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -57,8 +57,8 @@ extern memptr free_mem_end_ptr;
 void *malloc(int size);
 void free(void *where);
 extern struct boot_params *boot_params;
-void __putstr(const char *s);
-void __puthex(unsigned long value);
+extern void (*__putstr)(const char *s);
+extern void (*__puthex)(unsigned long value);
 #define error_putstr(__x)  __putstr(__x)
 #define error_puthex(__x)  __puthex(__x)
 
@@ -128,6 +128,11 @@ static inline void console_init(void)
 { }
 #endif
 
+/* putstr.c */
+void init_bare_console(void);
+void init_console_func(void (*putstr_)(const char *),
+		       void (*puthex_)(unsigned long));
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 void sev_enable(struct boot_params *bp);
 void sev_es_shutdown_ghcb(void);
diff --git a/arch/x86/boot/compressed/putstr.c b/arch/x86/boot/compressed/putstr.c
new file mode 100644
index 000000000000..44a4c3dacec5
--- /dev/null
+++ b/arch/x86/boot/compressed/putstr.c
@@ -0,0 +1,130 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "misc.h"
+
+/* These might be accessed before .bss is cleared, so use .data instead. */
+static char *vidmem __section(".data");
+static int vidport __section(".data");
+static int lines __section(".data");
+static int cols __section(".data");
+
+void (*__putstr)(const char *s);
+void (*__puthex)(unsigned long value);
+
+static void putstr(const char *s);
+static void puthex(unsigned long value);
+
+void init_console_func(void (*putstr_)(const char *),
+		       void (*puthex_)(unsigned long))
+{
+	__putstr = putstr_;
+	__puthex = puthex_;
+}
+
+void init_bare_console(void)
+{
+	init_console_func(putstr, puthex);
+
+	if (boot_params->screen_info.orig_video_mode == 7) {
+		vidmem = (char *) 0xb0000;
+		vidport = 0x3b4;
+	} else {
+		vidmem = (char *) 0xb8000;
+		vidport = 0x3d4;
+	}
+
+	lines = boot_params->screen_info.orig_video_lines;
+	cols = boot_params->screen_info.orig_video_cols;
+
+	console_init();
+}
+
+static void scroll(void)
+{
+	int i;
+
+	memmove(vidmem, vidmem + cols * 2, (lines - 1) * cols * 2);
+	for (i = (lines - 1) * cols * 2; i < lines * cols * 2; i += 2)
+		vidmem[i] = ' ';
+}
+
+#define XMTRDY          0x20
+
+#define TXR             0       /*  Transmit register (WRITE) */
+#define LSR             5       /*  Line Status               */
+
+static void serial_putchar(int ch)
+{
+	unsigned int timeout = 0xffff;
+
+	while ((inb(early_serial_base + LSR) & XMTRDY) == 0 && --timeout)
+		cpu_relax();
+
+	outb(ch, early_serial_base + TXR);
+}
+
+static void putstr(const char *s)
+{
+	int x, y, pos;
+	char c;
+
+	if (early_serial_base) {
+		const char *str = s;
+
+		while (*str) {
+			if (*str == '\n')
+				serial_putchar('\r');
+			serial_putchar(*str++);
+		}
+	}
+
+	if (lines == 0 || cols == 0)
+		return;
+
+	x = boot_params->screen_info.orig_x;
+	y = boot_params->screen_info.orig_y;
+
+	while ((c = *s++) != '\0') {
+		if (c == '\n') {
+			x = 0;
+			if (++y >= lines) {
+				scroll();
+				y--;
+			}
+		} else {
+			vidmem[(x + cols * y) * 2] = c;
+			if (++x >= cols) {
+				x = 0;
+				if (++y >= lines) {
+					scroll();
+					y--;
+				}
+			}
+		}
+	}
+
+	boot_params->screen_info.orig_x = x;
+	boot_params->screen_info.orig_y = y;
+
+	pos = (x + cols * y) * 2;	/* Update cursor position */
+	outb(14, vidport);
+	outb(0xff & (pos >> 9), vidport+1);
+	outb(15, vidport);
+	outb(0xff & (pos >> 1), vidport+1);
+}
+
+static void puthex(unsigned long value)
+{
+	char alpha[2] = "0";
+	int bits;
+
+	for (bits = sizeof(value) * 8 - 4; bits >= 0; bits -= 4) {
+		unsigned long digit = (value >> bits) & 0xf;
+
+		if (digit < 0xA)
+			alpha[0] = '0' + digit;
+		else
+			alpha[0] = 'a' + (digit - 0xA);
+
+		putstr(alpha);
+	}
+}
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (10 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 11/26] x86/boot: Make console interface more abstract Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 14:52   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 13/26] x86/boot: Split trampoline and pt init code Evgeniy Baskov
                   ` (14 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Convert kernel_add_identity_map() into a function pointer to be able
to provide alternative implementations of this function. Required
to enable calling the code using this function from EFI environment.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/ident_map_64.c |  7 ++++---
 arch/x86/boot/compressed/misc.c         | 24 ++++++++++++++++++++++++
 arch/x86/boot/compressed/misc.h         | 15 +++------------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index ba5108c58a4e..1aee524d3c2b 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -92,9 +92,9 @@ bool has_nx; /* set in head_64.S */
 /*
  * Adds the specified range to the identity mappings.
  */
-unsigned long kernel_add_identity_map(unsigned long start,
-				      unsigned long end,
-				      unsigned int flags)
+unsigned long kernel_add_identity_map_(unsigned long start,
+				       unsigned long end,
+				       unsigned int flags)
 {
 	int ret;
 
@@ -142,6 +142,7 @@ void initialize_identity_maps(void *rmode)
 	struct setup_data *sd;
 
 	boot_params = rmode;
+	kernel_add_identity_map = kernel_add_identity_map_;
 
 	/* Exclude the encryption mask from __PHYSICAL_MASK */
 	physical_mask &= ~sme_me_mask;
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index aa4a22bc9cf9..c9c235d65d16 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -275,6 +275,22 @@ static void parse_elf(void *output, unsigned long output_len,
 	free(phdrs);
 }
 
+/*
+ * This points to actual implementation of mapping function
+ * for current environment: either EFI API wrapper,
+ * own implementation or dummy implementation below.
+ */
+unsigned long (*kernel_add_identity_map)(unsigned long start,
+					 unsigned long end,
+					 unsigned int flags);
+
+static inline unsigned long kernel_add_identity_map_dummy(unsigned long start,
+							  unsigned long end,
+							  unsigned int flags)
+{
+	return start;
+}
+
 /*
  * The compressed kernel image (ZO), has been moved so that its position
  * is against the end of the buffer used to hold the uncompressed kernel
@@ -312,6 +328,14 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 
 	init_default_io_ops();
 
+	/*
+	 * On 64-bit this pointer is set during page table uninitialization,
+	 * but on 32-bit it remains uninitialized, since paging is disabled.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32))
+		kernel_add_identity_map = kernel_add_identity_map_dummy;
+
+
 	/*
 	 * Detect TDX guest environment.
 	 *
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 38d31bec062d..0076b2845b4b 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -180,18 +180,9 @@ static inline int count_immovable_mem_regions(void) { return 0; }
 #ifdef CONFIG_X86_5LEVEL
 extern unsigned int __pgtable_l5_enabled, pgdir_shift, ptrs_per_p4d;
 #endif
-#ifdef CONFIG_X86_64
-extern unsigned long kernel_add_identity_map(unsigned long start,
-					     unsigned long end,
-					     unsigned int flags);
-#else
-static inline unsigned long kernel_add_identity_map(unsigned long start,
-						    unsigned long end,
-						    unsigned int flags)
-{
-	return start;
-}
-#endif
+extern unsigned long (*kernel_add_identity_map)(unsigned long start,
+						unsigned long end,
+						unsigned int flags);
 /* Used by PAGE_KERN* macros: */
 extern pteval_t __default_kernel_pte_mask;
 
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 13/26] x86/boot: Split trampoline and pt init code
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (11 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 14:56   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 14/26] x86/boot: Add EFI kernel extraction interface Evgeniy Baskov
                   ` (13 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

When allocating trampoline from libstub trampoline allocation is
performed separately, so it needs to be skipped.

Split trampoline initialization and allocation code into two
functions to make them invokable separately.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/pgtable_64.c | 73 +++++++++++++++++----------
 1 file changed, 46 insertions(+), 27 deletions(-)

diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
index c7cf5a1059a8..1f7169248612 100644
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -106,12 +106,8 @@ static unsigned long find_trampoline_placement(void)
 	return bios_start - TRAMPOLINE_32BIT_SIZE;
 }
 
-struct paging_config paging_prepare(void *rmode)
+bool trampoline_pgtable_init(struct boot_params *boot_params)
 {
-	struct paging_config paging_config = {};
-
-	/* Initialize boot_params. Required for cmdline_find_option_bool(). */
-	boot_params = rmode;
 
 	/*
 	 * Check if LA57 is desired and supported.
@@ -125,26 +121,10 @@ struct paging_config paging_prepare(void *rmode)
 	 *
 	 * That's substitute for boot_cpu_has() in early boot code.
 	 */
-	if (IS_ENABLED(CONFIG_X86_5LEVEL) &&
-			!cmdline_find_option_bool("no5lvl") &&
-			native_cpuid_eax(0) >= 7 &&
-			(native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31)))) {
-		paging_config.l5_required = 1;
-	}
-
-	paging_config.trampoline_start = find_trampoline_placement();
-
-	trampoline_32bit = (unsigned long *)paging_config.trampoline_start;
-
-	/* Preserve trampoline memory */
-	memcpy(trampoline_save, trampoline_32bit, TRAMPOLINE_32BIT_SIZE);
-
-	/* Clear trampoline memory first */
-	memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
-
-	/* Copy trampoline code in place */
-	memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / sizeof(unsigned long),
-			&trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
+	bool l5_required = IS_ENABLED(CONFIG_X86_5LEVEL) &&
+			   !cmdline_find_option_bool("no5lvl") &&
+			   native_cpuid_eax(0) >= 7 &&
+			   (native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31)));
 
 	/*
 	 * The code below prepares page table in trampoline memory.
@@ -160,10 +140,10 @@ struct paging_config paging_prepare(void *rmode)
 	 * We are not going to use the page table in trampoline memory if we
 	 * are already in the desired paging mode.
 	 */
-	if (paging_config.l5_required == !!(native_read_cr4() & X86_CR4_LA57))
+	if (l5_required == !!(native_read_cr4() & X86_CR4_LA57))
 		goto out;
 
-	if (paging_config.l5_required) {
+	if (l5_required) {
 		/*
 		 * For 4- to 5-level paging transition, set up current CR3 as
 		 * the first and the only entry in a new top-level page table.
@@ -185,6 +165,45 @@ struct paging_config paging_prepare(void *rmode)
 		       (void *)src, PAGE_SIZE);
 	}
 
+out:
+	return l5_required;
+}
+
+struct paging_config paging_prepare(void *rmode)
+{
+	struct paging_config paging_config = {};
+	bool early_trampoline_alloc = 0;
+
+	/* Initialize boot_params. Required for cmdline_find_option_bool(). */
+	boot_params = rmode;
+
+	/*
+	 * We only need to find trampoline placement, if we have
+	 * not already done it from libstub.
+	 */
+
+	paging_config.trampoline_start = find_trampoline_placement();
+	trampoline_32bit = (unsigned long *)paging_config.trampoline_start;
+	early_trampoline_alloc = 0;
+
+	/*
+	 * Preserve trampoline memory.
+	 * When trampoline is located in memory
+	 * owned by us, i.e. allocated in EFISTUB,
+	 * we don't care about previous contents
+	 * of this memory so copying can also be skipped.
+	 */
+	memcpy(trampoline_save, trampoline_32bit, TRAMPOLINE_32BIT_SIZE);
+
+	/* Clear trampoline memory first */
+	memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
+
+	/* Copy trampoline code in place */
+	memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / sizeof(unsigned long),
+			&trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
+
+	paging_config.l5_required = trampoline_pgtable_init(boot_params);
+
 out:
 	return paging_config;
 }
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 14/26] x86/boot: Add EFI kernel extraction interface
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (12 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 13/26] x86/boot: Split trampoline and pt init code Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub Evgeniy Baskov
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

To enable extraction of kernel image from EFI stub code directly
extraction code needs to have separate interface that avoid part
of low level initialization logic, i.e. serial port setup.

Add kernel extraction function callable from libstub as a part
of preparation for extracting the kernel directly from EFI environment.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/head_32.S    |   3 +-
 arch/x86/boot/compressed/head_64.S    |   2 +-
 arch/x86/boot/compressed/misc.c       | 100 +++++++++++++++++---------
 arch/x86/boot/compressed/misc.h       |   1 +
 arch/x86/include/asm/shared/extract.h |  26 +++++++
 5 files changed, 96 insertions(+), 36 deletions(-)
 create mode 100644 arch/x86/include/asm/shared/extract.h

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 6589ddd4cfaf..ead6007df1e5 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -213,8 +213,7 @@ SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)
  */
 	.bss
 	.balign 4
-boot_heap:
-	.fill BOOT_HEAP_SIZE, 1, 0
+SYM_DATA(boot_heap,	.fill BOOT_HEAP_SIZE, 1, 0)
 boot_stack:
 	.fill BOOT_STACK_SIZE, 1, 0
 boot_stack_end:
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 8b9c4fe17126..2dd8be0583d2 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -747,7 +747,7 @@ SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)
  */
 	.bss
 	.balign 4
-SYM_DATA_LOCAL(boot_heap,	.fill BOOT_HEAP_SIZE, 1, 0)
+SYM_DATA(boot_heap,	.fill BOOT_HEAP_SIZE, 1, 0)
 
 SYM_DATA_START_LOCAL(boot_stack)
 	.fill BOOT_STACK_SIZE, 1, 0
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index c9c235d65d16..ebf229c38b3b 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -308,11 +308,11 @@ static inline unsigned long kernel_add_identity_map_dummy(unsigned long start,
  *             |-------uncompressed kernel image---------|
  *
  */
-asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
-				  unsigned char *input_data,
-				  unsigned long input_len,
-				  unsigned char *output,
-				  unsigned long output_len)
+static void *do_extract_kernel(void *rmode,
+			       unsigned char *input_data,
+			       unsigned long input_len,
+			       unsigned char *output,
+			       unsigned long output_len)
 {
 	const unsigned long kernel_total_size = VO__end - VO__text;
 	unsigned long virt_addr = LOAD_PHYSICAL_ADDR;
@@ -326,26 +326,6 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 
 	sanitize_boot_params(boot_params);
 
-	init_default_io_ops();
-
-	/*
-	 * On 64-bit this pointer is set during page table uninitialization,
-	 * but on 32-bit it remains uninitialized, since paging is disabled.
-	 */
-	if (IS_ENABLED(CONFIG_X86_32))
-		kernel_add_identity_map = kernel_add_identity_map_dummy;
-
-
-	/*
-	 * Detect TDX guest environment.
-	 *
-	 * It has to be done before console_init() in order to use
-	 * paravirtualized port I/O operations if needed.
-	 */
-	early_tdx_detect();
-
-	init_bare_console();
-
 	/*
 	 * Save RSDP address for later use. Have this after console_init()
 	 * so that early debugging output from the RSDP parsing code can be
@@ -353,11 +333,6 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	 */
 	boot_params->acpi_rsdp_addr = get_rsdp_addr();
 
-	debug_putstr("early console in extract_kernel\n");
-
-	free_mem_ptr     = heap;	/* Heap */
-	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
-
 	/*
 	 * The memory hole needed for the kernel is the larger of either
 	 * the entire decompressed kernel plus relocation table, or the
@@ -411,12 +386,12 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	if (virt_addr & (MIN_KERNEL_ALIGN - 1))
 		error("Destination virtual address inappropriately aligned");
 #ifdef CONFIG_X86_64
-	if (heap > 0x3fffffffffffUL)
+	if (phys_addr > 0x3fffffffffffUL)
 		error("Destination address too large");
 	if (virt_addr + max(output_len, kernel_total_size) > KERNEL_IMAGE_SIZE)
 		error("Destination virtual address is beyond the kernel mapping area");
 #else
-	if (heap > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
+	if (phys_addr > ((-__PAGE_OFFSET-(128<<20)-1) & 0x7fffffff))
 		error("Destination address too large");
 #endif
 #ifndef CONFIG_RELOCATABLE
@@ -430,12 +405,71 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	parse_elf(output, output_len, virt_addr);
 	debug_putstr("done.\nBooting the kernel.\n");
 
+	return output;
+}
+
+asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
+				  unsigned char *input_data,
+				  unsigned long input_len,
+				  unsigned char *output,
+				  unsigned long output_len)
+{
+	void *entry;
+
+	init_default_io_ops();
+
+	/*
+	 * On 64-bit this pointer is set during page table uninitialization,
+	 * but on 32-bit it remains uninitialized, since paging is disabled.
+	 */
+	if (IS_ENABLED(CONFIG_X86_32))
+		kernel_add_identity_map = kernel_add_identity_map_dummy;
+
+	/*
+	 * Detect TDX guest environment.
+	 *
+	 * It has to be done before console_init() in order to use
+	 * paravirtualized port I/O operations if needed.
+	 */
+	early_tdx_detect();
+
+	init_bare_console();
+
+	debug_putstr("early console in extract_kernel\n");
+
+	free_mem_ptr     = heap;	/* Heap */
+	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
+
+	entry = do_extract_kernel(rmode, input_data,
+				  input_len, output, output_len);
+
 	/* Disable exception handling before booting the kernel */
 	cleanup_exception_handling();
 
-	return output;
+	return entry;
 }
 
+void *efi_extract_kernel(struct boot_params *rmode,
+			 struct efi_extract_callbacks *cb,
+			 unsigned char *input_data,
+			 unsigned long input_len,
+			 unsigned long output_len)
+{
+	extern char boot_heap[BOOT_HEAP_SIZE];
+
+	free_mem_ptr     = (unsigned long)boot_heap;	/* Heap */
+	free_mem_end_ptr = (unsigned long)boot_heap + BOOT_HEAP_SIZE;
+
+	init_console_func(cb->putstr, cb->puthex);
+	kernel_add_identity_map = cb->map_range;
+
+	return do_extract_kernel(rmode, input_data,
+				 input_len, (void *)LOAD_PHYSICAL_ADDR, output_len);
+}
+
+
+
+
 void fortify_panic(const char *name)
 {
 	error("detected buffer overflow");
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 0076b2845b4b..379c4a3ca7dd 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -26,6 +26,7 @@
 #include <asm/boot.h>
 #include <asm/bootparam.h>
 #include <asm/desc_defs.h>
+#include <asm/shared/extract.h>
 
 #include "tdx.h"
 
diff --git a/arch/x86/include/asm/shared/extract.h b/arch/x86/include/asm/shared/extract.h
new file mode 100644
index 000000000000..46bf56348a86
--- /dev/null
+++ b/arch/x86/include/asm/shared/extract.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef ASM_SHARED_EXTRACT_H
+#define ASM_SHARED_EXTRACT_H
+
+#include <asm/bootparam.h>
+
+#define MAP_WRITE	0x02 /* Writable memory */
+#define MAP_EXEC	0x04 /* Executable memory */
+#define MAP_ALLOC	0x10 /* Range needs to be allocated */
+#define MAP_PROTECT	0x20 /* Set exact memory attributes for memory range */
+
+struct efi_extract_callbacks {
+	void (*putstr)(const char *msg);
+	void (*puthex)(unsigned long x);
+	unsigned long (*map_range)(unsigned long start,
+				   unsigned long end,
+				   unsigned int flags);
+};
+
+void *efi_extract_kernel(struct boot_params *rmode,
+			 struct efi_extract_callbacks *cb,
+			 unsigned char *input_data,
+			 unsigned long input_len,
+			 unsigned long output_len);
+
+#endif /* ASM_SHARED_EXTRACT_H */
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (13 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 14/26] x86/boot: Add EFI kernel extraction interface Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-09 16:00   ` Ard Biesheuvel
                     ` (2 more replies)
  2022-12-15 12:38 ` [PATCH v4 16/26] x86/boot: Reduce lower limit of physical KASLR Evgeniy Baskov
                   ` (11 subsequent siblings)
  26 siblings, 3 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Doing it that way allows setting up stricter memory attributes,
simplifies boot code path and removes potential relocation
of kernel image.

Wire up required interfaces and minimally initialize zero page
fields needed for it to function correctly.

Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/head_32.S            |  50 ++++-
 arch/x86/boot/compressed/head_64.S            |  58 ++++-
 drivers/firmware/efi/Kconfig                  |   2 +
 drivers/firmware/efi/libstub/Makefile         |   2 +-
 .../firmware/efi/libstub/x86-extract-direct.c | 208 ++++++++++++++++++
 drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
 drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
 7 files changed, 338 insertions(+), 115 deletions(-)
 create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
 create mode 100644 drivers/firmware/efi/libstub/x86-stub.h

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index ead6007df1e5..0be75e5072ae 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -152,11 +152,57 @@ SYM_FUNC_END(startup_32)
 
 #ifdef CONFIG_EFI_STUB
 SYM_FUNC_START(efi32_stub_entry)
+/*
+ * Calculate the delta between where we were compiled to run
+ * at and where we were actually loaded at.  This can only be done
+ * with a short local call on x86.  Nothing  else will tell us what
+ * address we are running at.  The reserved chunk of the real-mode
+ * data at 0x1e4 (defined as a scratch field) are used as the stack
+ * for this calculation. Only 4 bytes are needed.
+ */
+	call	1f
+1:	popl	%ebx
+	addl	$_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx
+
+	/* Clear BSS */
+	xorl	%eax, %eax
+	leal	_bss@GOTOFF(%ebx), %edi
+	leal	_ebss@GOTOFF(%ebx), %ecx
+	subl	%edi, %ecx
+	shrl	$2, %ecx
+	rep	stosl
+
 	add	$0x4, %esp
 	movl	8(%esp), %esi	/* save boot_params pointer */
+	movl	%edx, %edi	/* save GOT address */
 	call	efi_main
-	/* efi_main returns the possibly relocated address of startup_32 */
-	jmp	*%eax
+	movl	%eax, %ecx
+
+	/*
+	 * efi_main returns the possibly
+	 * relocated address of extracted kernel entry point.
+	 */
+
+	cli
+
+	/* Load new GDT */
+	leal	gdt@GOTOFF(%ebx), %eax
+	movl	%eax, 2(%eax)
+	lgdt	(%eax)
+
+	/* Load segment registers with our descriptors */
+	movl	$__BOOT_DS, %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	%eax, %fs
+	movl	%eax, %gs
+	movl	%eax, %ss
+
+	/* Zero EFLAGS */
+	pushl	$0
+	popfl
+
+	jmp	*%ecx
 SYM_FUNC_END(efi32_stub_entry)
 SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
 #endif
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 2dd8be0583d2..7cfef7bd0424 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -529,12 +529,64 @@ SYM_CODE_END(startup_64)
 	.org 0x390
 #endif
 SYM_FUNC_START(efi64_stub_entry)
+	/* Preserve first parameter */
+	movq	%rdi, %r10
+
+	/* Clear BSS */
+	xorl	%eax, %eax
+	leaq	_bss(%rip), %rdi
+	leaq	_ebss(%rip), %rcx
+	subq	%rdi, %rcx
+	shrq	$3, %rcx
+	rep	stosq
+
 	and	$~0xf, %rsp			/* realign the stack */
 	movq	%rdx, %rbx			/* save boot_params pointer */
+	movq	%r10, %rdi
 	call	efi_main
-	movq	%rbx,%rsi
-	leaq	rva(startup_64)(%rax), %rax
-	jmp	*%rax
+
+	cld
+	cli
+
+	movq	%rbx, %rdi /* boot_params */
+	movq	%rax, %rsi /* decompressed kernel address */
+
+	/* Make sure we have GDT with 32-bit code segment */
+	leaq	gdt64(%rip), %rax
+	addq	%rax, 2(%rax)
+	lgdt	(%rax)
+
+	/* Setup data segments. */
+	xorl	%eax, %eax
+	movl	%eax, %ds
+	movl	%eax, %es
+	movl	%eax, %ss
+	movl	%eax, %fs
+	movl	%eax, %gs
+
+	pushq	%rsi
+	pushq	%rdi
+
+	call	load_stage1_idt
+	call	enable_nx_if_supported
+
+	call	trampoline_pgtable_init
+	movq	%rax, %rdx
+
+
+	/* Swap %rsi and %rsi */
+	popq	%rsi
+	popq	%rdi
+
+	/* Save the trampoline address in RCX */
+	movq	trampoline_32bit(%rip), %rcx
+
+	/* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far return */
+	pushq	$__KERNEL32_CS
+	leaq	TRAMPOLINE_32BIT_CODE_OFFSET(%rcx), %rax
+	pushq	%rax
+	lretq
+
 SYM_FUNC_END(efi64_stub_entry)
 SYM_FUNC_ALIAS(efi_stub_entry, efi64_stub_entry)
 #endif
diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
index 043ca31c114e..f50c2a84a754 100644
--- a/drivers/firmware/efi/Kconfig
+++ b/drivers/firmware/efi/Kconfig
@@ -58,6 +58,8 @@ config EFI_DXE_MEM_ATTRIBUTES
 	  Use DXE services to check and alter memory protection
 	  attributes during boot via EFISTUB to ensure that memory
 	  ranges used by the kernel are writable and executable.
+	  This option also enables stricter memory attributes
+	  on compressed kernel PE image.
 
 config EFI_PARAMS_FROM_FDT
 	bool
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index be8b8c6e8b40..99b81c95344c 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -88,7 +88,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)	+= efi-stub.o string.o intrinsics.o systable.o \
 
 lib-$(CONFIG_ARM)		+= arm32-stub.o
 lib-$(CONFIG_ARM64)		+= arm64.o arm64-stub.o arm64-entry.o smbios.o
-lib-$(CONFIG_X86)		+= x86-stub.o
+lib-$(CONFIG_X86)		+= x86-stub.o x86-extract-direct.o
 lib-$(CONFIG_RISCV)		+= riscv.o riscv-stub.o
 lib-$(CONFIG_LOONGARCH)		+= loongarch.o loongarch-stub.o
 
diff --git a/drivers/firmware/efi/libstub/x86-extract-direct.c b/drivers/firmware/efi/libstub/x86-extract-direct.c
new file mode 100644
index 000000000000..4ecbc4a9b3ed
--- /dev/null
+++ b/drivers/firmware/efi/libstub/x86-extract-direct.c
@@ -0,0 +1,208 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/acpi.h>
+#include <linux/efi.h>
+#include <linux/elf.h>
+#include <linux/stddef.h>
+
+#include <asm/efi.h>
+#include <asm/e820/types.h>
+#include <asm/desc.h>
+#include <asm/boot.h>
+#include <asm/bootparam_utils.h>
+#include <asm/shared/extract.h>
+#include <asm/shared/pgtable.h>
+
+#include "efistub.h"
+#include "x86-stub.h"
+
+static efi_handle_t image_handle;
+
+static void do_puthex(unsigned long value)
+{
+	efi_printk("%08lx", value);
+}
+
+static void do_putstr(const char *msg)
+{
+	efi_printk("%s", msg);
+}
+
+static unsigned long do_map_range(unsigned long start,
+				  unsigned long end,
+				  unsigned int flags)
+{
+	efi_status_t status;
+
+	unsigned long size = end - start;
+
+	if (flags & MAP_ALLOC) {
+		unsigned long addr;
+
+		status = efi_low_alloc_above(size, CONFIG_PHYSICAL_ALIGN,
+					     &addr, start);
+		if (status != EFI_SUCCESS) {
+			efi_err("Unable to allocate memory for uncompressed kernel");
+			efi_exit(image_handle, EFI_OUT_OF_RESOURCES);
+		}
+
+		if (start != addr) {
+			efi_debug("Unable to allocate at given address"
+				  " (desired=0x%lx, actual=0x%lx)",
+				  (unsigned long)start, addr);
+			start = addr;
+		}
+	}
+
+	if ((flags & (MAP_PROTECT | MAP_ALLOC)) &&
+	    IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
+		unsigned long attr = 0;
+
+		if (!(flags & MAP_EXEC))
+			attr |= EFI_MEMORY_XP;
+
+		if (!(flags & MAP_WRITE))
+			attr |= EFI_MEMORY_RO;
+
+		status = efi_adjust_memory_range_protection(start, size, attr);
+		if (status != EFI_SUCCESS)
+			efi_err("Unable to protect memory range");
+	}
+
+	return start;
+}
+
+/*
+ * Trampoline takes 3 pages and can be loaded in first megabyte of memory
+ * with its end placed between 0 and 640k where BIOS might start.
+ * (see arch/x86/boot/compressed/pgtable_64.c)
+ */
+
+#ifdef CONFIG_64BIT
+static efi_status_t prepare_trampoline(void)
+{
+	efi_status_t status;
+
+	status = efi_allocate_pages(TRAMPOLINE_32BIT_SIZE,
+				    (unsigned long *)&trampoline_32bit,
+				    TRAMPOLINE_32BIT_PLACEMENT_MAX);
+
+	if (status != EFI_SUCCESS)
+		return status;
+
+	unsigned long trampoline_start = (unsigned long)trampoline_32bit;
+
+	memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
+
+	if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
+		/* First page of trampoline is a top level page table */
+		efi_adjust_memory_range_protection(trampoline_start,
+						   PAGE_SIZE,
+						   EFI_MEMORY_XP);
+	}
+
+	/* Second page of trampoline is the code (with a padding) */
+
+	void *caddr = (void *)trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET;
+
+	memcpy(caddr, trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
+
+	if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
+		efi_adjust_memory_range_protection((unsigned long)caddr,
+						   PAGE_SIZE,
+						   EFI_MEMORY_RO);
+
+		/* And the last page of trampoline is the stack */
+
+		efi_adjust_memory_range_protection(trampoline_start + 2 * PAGE_SIZE,
+						   PAGE_SIZE,
+						   EFI_MEMORY_XP);
+	}
+
+	return EFI_SUCCESS;
+}
+#else
+static inline efi_status_t prepare_trampoline(void)
+{
+	return EFI_SUCCESS;
+}
+#endif
+
+static efi_status_t init_loader_data(efi_handle_t handle,
+				     struct boot_params *params,
+				     struct efi_boot_memmap **map)
+{
+	struct efi_info *efi = (void *)&params->efi_info;
+	efi_status_t status;
+
+	status = efi_get_memory_map(map, false);
+
+	if (status != EFI_SUCCESS) {
+		efi_err("Unable to get EFI memory map...\n");
+		return status;
+	}
+
+	const char *signature = efi_is_64bit() ? EFI64_LOADER_SIGNATURE
+					       : EFI32_LOADER_SIGNATURE;
+
+	memcpy(&efi->efi_loader_signature, signature, sizeof(__u32));
+
+	efi->efi_memdesc_size = (*map)->desc_size;
+	efi->efi_memdesc_version = (*map)->desc_ver;
+	efi->efi_memmap_size = (*map)->map_size;
+
+	efi_set_u64_split((unsigned long)(*map)->map,
+			  &efi->efi_memmap, &efi->efi_memmap_hi);
+
+	efi_set_u64_split((unsigned long)efi_system_table,
+			  &efi->efi_systab, &efi->efi_systab_hi);
+
+	image_handle = handle;
+
+	return EFI_SUCCESS;
+}
+
+static void free_loader_data(struct boot_params *params, struct efi_boot_memmap *map)
+{
+	struct efi_info *efi = (void *)&params->efi_info;
+
+	efi_bs_call(free_pool, map);
+
+	efi->efi_memdesc_size = 0;
+	efi->efi_memdesc_version = 0;
+	efi->efi_memmap_size = 0;
+	efi_set_u64_split(0, &efi->efi_memmap, &efi->efi_memmap_hi);
+}
+
+extern unsigned char input_data[];
+extern unsigned int input_len, output_len;
+
+unsigned long extract_kernel_direct(efi_handle_t handle, struct boot_params *params)
+{
+
+	void *res;
+	efi_status_t status;
+	struct efi_extract_callbacks cb = { 0 };
+
+	status = prepare_trampoline();
+
+	if (status != EFI_SUCCESS)
+		return 0;
+
+	/* Prepare environment for do_extract_kernel() call */
+	struct efi_boot_memmap *map = NULL;
+	status = init_loader_data(handle, params, &map);
+
+	if (status != EFI_SUCCESS)
+		return 0;
+
+	cb.puthex = do_puthex;
+	cb.putstr = do_putstr;
+	cb.map_range = do_map_range;
+
+	res = efi_extract_kernel(params, &cb, input_data, input_len, output_len);
+
+	free_loader_data(params, map);
+
+	return (unsigned long)res;
+}
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 7fb1eff88a18..1d1ab1911fd3 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -17,6 +17,7 @@
 #include <asm/boot.h>
 
 #include "efistub.h"
+#include "x86-stub.h"
 
 /* Maximum physical address for 64-bit kernel with 4-level paging */
 #define MAXMEM_X86_64_4LEVEL (1ull << 46)
@@ -24,7 +25,7 @@
 const efi_system_table_t *efi_system_table;
 const efi_dxe_services_table_t *efi_dxe_table;
 u32 image_offset __section(".data");
-static efi_loaded_image_t *image = NULL;
+static efi_loaded_image_t *image __section(".data");
 
 static efi_status_t
 preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
@@ -212,55 +213,9 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params)
 	}
 }
 
-/*
- * Trampoline takes 2 pages and can be loaded in first megabyte of memory
- * with its end placed between 128k and 640k where BIOS might start.
- * (see arch/x86/boot/compressed/pgtable_64.c)
- *
- * We cannot find exact trampoline placement since memory map
- * can be modified by UEFI, and it can alter the computed address.
- */
-
-#define TRAMPOLINE_PLACEMENT_BASE ((128 - 8)*1024)
-#define TRAMPOLINE_PLACEMENT_SIZE (640*1024 - (128 - 8)*1024)
-
-void startup_32(struct boot_params *boot_params);
-
-static void
-setup_memory_protection(unsigned long image_base, unsigned long image_size)
-{
-	/*
-	 * Allow execution of possible trampoline used
-	 * for switching between 4- and 5-level page tables
-	 * and relocated kernel image.
-	 */
-
-	efi_adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
-					   TRAMPOLINE_PLACEMENT_SIZE, 0);
-
-#ifdef CONFIG_64BIT
-	if (image_base != (unsigned long)startup_32)
-		efi_adjust_memory_range_protection(image_base, image_size, 0);
-#else
-	/*
-	 * Clear protection flags on a whole range of possible
-	 * addresses used for KASLR. We don't need to do that
-	 * on x86_64, since KASLR/extraction is performed after
-	 * dedicated identity page tables are built and we only
-	 * need to remove possible protection on relocated image
-	 * itself disregarding further relocations.
-	 */
-	efi_adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
-					   KERNEL_IMAGE_SIZE - LOAD_PHYSICAL_ADDR,
-					   0);
-#endif
-}
-
 static const efi_char16_t apple[] = L"Apple";
 
-static void setup_quirks(struct boot_params *boot_params,
-			 unsigned long image_base,
-			 unsigned long image_size)
+static void setup_quirks(struct boot_params *boot_params)
 {
 	efi_char16_t *fw_vendor = (efi_char16_t *)(unsigned long)
 		efi_table_attr(efi_system_table, fw_vendor);
@@ -269,9 +224,6 @@ static void setup_quirks(struct boot_params *boot_params,
 		if (IS_ENABLED(CONFIG_APPLE_PROPERTIES))
 			retrieve_apple_device_properties(boot_params);
 	}
-
-	if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES))
-		setup_memory_protection(image_base, image_size);
 }
 
 /*
@@ -384,7 +336,7 @@ static void setup_graphics(struct boot_params *boot_params)
 }
 
 
-static void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
+void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
 {
 	efi_bs_call(exit, handle, status, 0, NULL);
 	for(;;)
@@ -707,8 +659,7 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle)
 }
 
 /*
- * On success, we return the address of startup_32, which has potentially been
- * relocated by efi_relocate_kernel.
+ * On success, we return extracted kernel entry point.
  * On failure, we exit to the firmware via efi_exit instead of returning.
  */
 asmlinkage unsigned long efi_main(efi_handle_t handle,
@@ -733,60 +684,6 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
 		efi_dxe_table = NULL;
 	}
 
-	/*
-	 * If the kernel isn't already loaded at a suitable address,
-	 * relocate it.
-	 *
-	 * It must be loaded above LOAD_PHYSICAL_ADDR.
-	 *
-	 * The maximum address for 64-bit is 1 << 46 for 4-level paging. This
-	 * is defined as the macro MAXMEM, but unfortunately that is not a
-	 * compile-time constant if 5-level paging is configured, so we instead
-	 * define our own macro for use here.
-	 *
-	 * For 32-bit, the maximum address is complicated to figure out, for
-	 * now use KERNEL_IMAGE_SIZE, which will be 512MiB, the same as what
-	 * KASLR uses.
-	 *
-	 * Also relocate it if image_offset is zero, i.e. the kernel wasn't
-	 * loaded by LoadImage, but rather by a bootloader that called the
-	 * handover entry. The reason we must always relocate in this case is
-	 * to handle the case of systemd-boot booting a unified kernel image,
-	 * which is a PE executable that contains the bzImage and an initrd as
-	 * COFF sections. The initrd section is placed after the bzImage
-	 * without ensuring that there are at least init_size bytes available
-	 * for the bzImage, and thus the compressed kernel's startup code may
-	 * overwrite the initrd unless it is moved out of the way.
-	 */
-
-	buffer_start = ALIGN(bzimage_addr - image_offset,
-			     hdr->kernel_alignment);
-	buffer_end = buffer_start + hdr->init_size;
-
-	if ((buffer_start < LOAD_PHYSICAL_ADDR)				     ||
-	    (IS_ENABLED(CONFIG_X86_32) && buffer_end > KERNEL_IMAGE_SIZE)    ||
-	    (IS_ENABLED(CONFIG_X86_64) && buffer_end > MAXMEM_X86_64_4LEVEL) ||
-	    (image_offset == 0)) {
-		extern char _bss[];
-
-		status = efi_relocate_kernel(&bzimage_addr,
-					     (unsigned long)_bss - bzimage_addr,
-					     hdr->init_size,
-					     hdr->pref_address,
-					     hdr->kernel_alignment,
-					     LOAD_PHYSICAL_ADDR);
-		if (status != EFI_SUCCESS) {
-			efi_err("efi_relocate_kernel() failed!\n");
-			goto fail;
-		}
-		/*
-		 * Now that we've copied the kernel elsewhere, we no longer
-		 * have a set up block before startup_32(), so reset image_offset
-		 * to zero in case it was set earlier.
-		 */
-		image_offset = 0;
-	}
-
 #ifdef CONFIG_CMDLINE_BOOL
 	status = efi_parse_options(CONFIG_CMDLINE);
 	if (status != EFI_SUCCESS) {
@@ -843,7 +740,11 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
 
 	setup_efi_pci(boot_params);
 
-	setup_quirks(boot_params, bzimage_addr, buffer_end - buffer_start);
+	setup_quirks(boot_params);
+
+	bzimage_addr = extract_kernel_direct(handle, boot_params);
+	if (!bzimage_addr)
+		goto fail;
 
 	status = exit_boot(boot_params, handle);
 	if (status != EFI_SUCCESS) {
diff --git a/drivers/firmware/efi/libstub/x86-stub.h b/drivers/firmware/efi/libstub/x86-stub.h
new file mode 100644
index 000000000000..baecc7c6e602
--- /dev/null
+++ b/drivers/firmware/efi/libstub/x86-stub.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _DRIVERS_FIRMWARE_EFI_X86STUB_H
+#define _DRIVERS_FIRMWARE_EFI_X86STUB_H
+
+#include <linux/efi.h>
+
+#include <asm/bootparam.h>
+
+void __noreturn efi_exit(efi_handle_t handle, efi_status_t status);
+unsigned long extract_kernel_direct(efi_handle_t handle, struct boot_params *boot_params);
+void startup_32(struct boot_params *boot_params);
+
+#endif
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 16/26] x86/boot: Reduce lower limit of physical KASLR
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (14 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub Evgeniy Baskov
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Set lower limit of physical KASLR to 64M.

Previously is was set to 512M when kernel is loaded higher than that.
That prevented physical KASLR from being performed on x86_32, where
upper limit is also set to 512M. The limit is pretty arbitrary, and the
most important is to set it above the ISA hole, i.e. higher than 16M.

It was not that important before, but now kernel is not getting
relocated to the lower address when booting via EFI, exposing the
KASLR failures.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/compressed/kaslr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index c0ee116c4fa2..74d1327adbba 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -852,10 +852,10 @@ void choose_random_location(unsigned long input,
 
 	/*
 	 * Low end of the randomization range should be the
-	 * smaller of 512M or the initial kernel image
+	 * smaller of 64M or the initial kernel image
 	 * location:
 	 */
-	min_addr = min(*output, 512UL << 20);
+	min_addr = min(*output, 64UL << 20);
 	/* Make sure minimum is aligned. */
 	min_addr = ALIGN(min_addr, CONFIG_PHYSICAL_ALIGN);
 
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (15 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 16/26] x86/boot: Reduce lower limit of physical KASLR Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 14:59   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 18/26] tools/include: Add simplified version of pe.h Evgeniy Baskov
                   ` (9 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

This is required to fit more sections in PE section tables,
since its size is restricted by zero page located at specific offset
after the PE header.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/header.S | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
index 9338c68e7413..9fec80bc504b 100644
--- a/arch/x86/boot/header.S
+++ b/arch/x86/boot/header.S
@@ -59,17 +59,16 @@ start2:
 	cld
 
 	movw	$bugger_off_msg, %si
+	movw	$bugger_off_msg_size, %cx
 
 msg_loop:
 	lodsb
-	andb	%al, %al
-	jz	bs_die
 	movb	$0xe, %ah
 	movw	$7, %bx
 	int	$0x10
-	jmp	msg_loop
+	decw	%cx
+	jnz	msg_loop
 
-bs_die:
 	# Allow the user to press a key, then reboot
 	xorw	%ax, %ax
 	int	$0x16
@@ -90,10 +89,9 @@ bs_die:
 
 	.section ".bsdata", "a"
 bugger_off_msg:
-	.ascii	"Use a boot loader.\r\n"
-	.ascii	"\n"
-	.ascii	"Remove disk and press any key to reboot...\r\n"
-	.byte	0
+	.ascii	"Use a boot loader. "
+	.ascii	"Press a key to reboot"
+	.set	bugger_off_msg_size, . - bugger_off_msg
 
 #ifdef CONFIG_EFI_STUB
 pe_header:
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 18/26] tools/include: Add simplified version of pe.h
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (16 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 15:01   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 19/26] x86/build: Cleanup tools/build.c Evgeniy Baskov
                   ` (8 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

This is needed to remove magic numbers from x86 bzImage building tool
(arch/x86/boot/tools/build.c).

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 tools/include/linux/pe.h | 150 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 150 insertions(+)
 create mode 100644 tools/include/linux/pe.h

diff --git a/tools/include/linux/pe.h b/tools/include/linux/pe.h
new file mode 100644
index 000000000000..41c09ec371d8
--- /dev/null
+++ b/tools/include/linux/pe.h
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Simplified version of include/linux/pe.h:
+ *  Copyright 2011 Red Hat, Inc. All rights reserved.
+ *  Author(s): Peter Jones <pjones@redhat.com>
+ */
+#ifndef __LINUX_PE_H
+#define __LINUX_PE_H
+
+#include <linux/types.h>
+
+#define	IMAGE_FILE_MACHINE_I386		0x014c
+
+#define IMAGE_SCN_CNT_CODE	0x00000020 /* .text */
+#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040 /* .data */
+#define IMAGE_SCN_ALIGN_4096BYTES 0x00d00000
+#define IMAGE_SCN_MEM_DISCARDABLE 0x02000000 /* scn can be discarded */
+#define IMAGE_SCN_MEM_EXECUTE	0x20000000 /* can be executed as code */
+#define IMAGE_SCN_MEM_READ	0x40000000 /* readable */
+#define IMAGE_SCN_MEM_WRITE	0x80000000 /* writeable */
+
+#define MZ_HEADER_PEADDR_OFFSET 0x3c
+
+struct pe_hdr {
+	uint32_t magic;		/* PE magic */
+	uint16_t machine;	/* machine type */
+	uint16_t sections;	/* number of sections */
+	uint32_t timestamp;	/* time_t */
+	uint32_t symbol_table;	/* symbol table offset */
+	uint32_t symbols;	/* number of symbols */
+	uint16_t opt_hdr_size;	/* size of optional header */
+	uint16_t flags;		/* flags */
+};
+
+/* the fact that pe32 isn't padded where pe32+ is 64-bit means union won't
+ * work right.  vomit. */
+struct pe32_opt_hdr {
+	/* "standard" header */
+	uint16_t magic;		/* file type */
+	uint8_t  ld_major;	/* linker major version */
+	uint8_t  ld_minor;	/* linker minor version */
+	uint32_t text_size;	/* size of text section(s) */
+	uint32_t data_size;	/* size of data section(s) */
+	uint32_t bss_size;	/* size of bss section(s) */
+	uint32_t entry_point;	/* file offset of entry point */
+	uint32_t code_base;	/* relative code addr in ram */
+	uint32_t data_base;	/* relative data addr in ram */
+	/* "windows" header */
+	uint32_t image_base;	/* preferred load address */
+	uint32_t section_align;	/* alignment in bytes */
+	uint32_t file_align;	/* file alignment in bytes */
+	uint16_t os_major;	/* major OS version */
+	uint16_t os_minor;	/* minor OS version */
+	uint16_t image_major;	/* major image version */
+	uint16_t image_minor;	/* minor image version */
+	uint16_t subsys_major;	/* major subsystem version */
+	uint16_t subsys_minor;	/* minor subsystem version */
+	uint32_t win32_version;	/* reserved, must be 0 */
+	uint32_t image_size;	/* image size */
+	uint32_t header_size;	/* header size rounded up to
+				   file_align */
+	uint32_t csum;		/* checksum */
+	uint16_t subsys;	/* subsystem */
+	uint16_t dll_flags;	/* more flags! */
+	uint32_t stack_size_req;/* amt of stack requested */
+	uint32_t stack_size;	/* amt of stack required */
+	uint32_t heap_size_req;	/* amt of heap requested */
+	uint32_t heap_size;	/* amt of heap required */
+	uint32_t loader_flags;	/* reserved, must be 0 */
+	uint32_t data_dirs;	/* number of data dir entries */
+};
+
+struct pe32plus_opt_hdr {
+	uint16_t magic;		/* file type */
+	uint8_t  ld_major;	/* linker major version */
+	uint8_t  ld_minor;	/* linker minor version */
+	uint32_t text_size;	/* size of text section(s) */
+	uint32_t data_size;	/* size of data section(s) */
+	uint32_t bss_size;	/* size of bss section(s) */
+	uint32_t entry_point;	/* file offset of entry point */
+	uint32_t code_base;	/* relative code addr in ram */
+	/* "windows" header */
+	uint64_t image_base;	/* preferred load address */
+	uint32_t section_align;	/* alignment in bytes */
+	uint32_t file_align;	/* file alignment in bytes */
+	uint16_t os_major;	/* major OS version */
+	uint16_t os_minor;	/* minor OS version */
+	uint16_t image_major;	/* major image version */
+	uint16_t image_minor;	/* minor image version */
+	uint16_t subsys_major;	/* major subsystem version */
+	uint16_t subsys_minor;	/* minor subsystem version */
+	uint32_t win32_version;	/* reserved, must be 0 */
+	uint32_t image_size;	/* image size */
+	uint32_t header_size;	/* header size rounded up to
+				   file_align */
+	uint32_t csum;		/* checksum */
+	uint16_t subsys;	/* subsystem */
+	uint16_t dll_flags;	/* more flags! */
+	uint64_t stack_size_req;/* amt of stack requested */
+	uint64_t stack_size;	/* amt of stack required */
+	uint64_t heap_size_req;	/* amt of heap requested */
+	uint64_t heap_size;	/* amt of heap required */
+	uint32_t loader_flags;	/* reserved, must be 0 */
+	uint32_t data_dirs;	/* number of data dir entries */
+};
+
+struct data_dirent {
+	uint32_t virtual_address;	/* relative to load address */
+	uint32_t size;
+};
+
+struct data_directory {
+	struct data_dirent exports;		/* .edata */
+	struct data_dirent imports;		/* .idata */
+	struct data_dirent resources;		/* .rsrc */
+	struct data_dirent exceptions;		/* .pdata */
+	struct data_dirent certs;		/* certs */
+	struct data_dirent base_relocations;	/* .reloc */
+	struct data_dirent debug;		/* .debug */
+	struct data_dirent arch;		/* reservered */
+	struct data_dirent global_ptr;		/* global pointer reg. Size=0 */
+	struct data_dirent tls;			/* .tls */
+	struct data_dirent load_config;		/* load configuration structure */
+	struct data_dirent bound_imports;	/* no idea */
+	struct data_dirent import_addrs;	/* import address table */
+	struct data_dirent delay_imports;	/* delay-load import table */
+	struct data_dirent clr_runtime_hdr;	/* .cor (object only) */
+	struct data_dirent reserved;
+};
+
+struct section_header {
+	char name[8];			/* name or "/12\0" string tbl offset */
+	uint32_t virtual_size;		/* size of loaded section in ram */
+	uint32_t virtual_address;	/* relative virtual address */
+	uint32_t raw_data_size;		/* size of the section */
+	uint32_t data_addr;		/* file pointer to first page of sec */
+	uint32_t relocs;		/* file pointer to relocation entries */
+	uint32_t line_numbers;		/* line numbers! */
+	uint16_t num_relocs;		/* number of relocations */
+	uint16_t num_lin_numbers;	/* srsly. */
+	uint32_t flags;
+};
+
+struct coff_reloc {
+	uint32_t virtual_address;
+	uint32_t symbol_table_index;
+	uint16_t data;
+};
+
+#endif /* __LINUX_PE_H */
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 19/26] x86/build: Cleanup tools/build.c
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (17 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 18/26] tools/include: Add simplified version of pe.h Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-09 15:57   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 20/26] x86/build: Make generated PE more spec compliant Evgeniy Baskov
                   ` (7 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Use newer C standard. Since kernel requires C99 compiler now,
we can make use of the new features to make the core more readable.

Use mmap() for reading files also to make things simpler.

Replace most magic numbers with defines.

Should have no functional changes. This is done in preparation for the
next changes that makes generated PE header more spec compliant.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/tools/build.c | 387 +++++++++++++++++++++++-------------
 1 file changed, 245 insertions(+), 142 deletions(-)

diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
index bd247692b701..fbc5315af032 100644
--- a/arch/x86/boot/tools/build.c
+++ b/arch/x86/boot/tools/build.c
@@ -25,20 +25,21 @@
  * Substantially overhauled by H. Peter Anvin, April 2007
  */
 
+#include <fcntl.h>
+#include <stdarg.h>
+#include <stdint.h>
 #include <stdio.h>
-#include <string.h>
 #include <stdlib.h>
-#include <stdarg.h>
-#include <sys/types.h>
+#include <string.h>
+#include <sys/mman.h>
 #include <sys/stat.h>
+#include <sys/types.h>
 #include <unistd.h>
-#include <fcntl.h>
-#include <sys/mman.h>
+
 #include <tools/le_byteshift.h>
+#include <linux/pe.h>
 
-typedef unsigned char  u8;
-typedef unsigned short u16;
-typedef unsigned int   u32;
+#define round_up(x, n) (((x) + (n) - 1) & ~((n) - 1))
 
 #define DEFAULT_MAJOR_ROOT 0
 #define DEFAULT_MINOR_ROOT 0
@@ -48,8 +49,13 @@ typedef unsigned int   u32;
 #define SETUP_SECT_MIN 5
 #define SETUP_SECT_MAX 64
 
+#define PARAGRAPH_SIZE 16
+#define SECTOR_SIZE 512
+#define FILE_ALIGNMENT 512
+#define SECTION_ALIGNMENT 4096
+
 /* This must be large enough to hold the entire setup */
-u8 buf[SETUP_SECT_MAX*512];
+uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
 
 #define PECOFF_RELOC_RESERVE 0x20
 
@@ -59,6 +65,52 @@ u8 buf[SETUP_SECT_MAX*512];
 #define PECOFF_COMPAT_RESERVE 0x0
 #endif
 
+#define RELOC_SECTION_SIZE 10
+
+/* PE header has different format depending on the architecture */
+#ifdef CONFIG_X86_64
+typedef struct pe32plus_opt_hdr pe_opt_hdr;
+#else
+typedef struct pe32_opt_hdr pe_opt_hdr;
+#endif
+
+static inline struct pe_hdr *get_pe_header(uint8_t *buf)
+{
+	uint32_t pe_offset = get_unaligned_le32(buf+MZ_HEADER_PEADDR_OFFSET);
+	return (struct pe_hdr *)(buf + pe_offset);
+}
+
+static inline pe_opt_hdr *get_pe_opt_header(uint8_t *buf)
+{
+	return (pe_opt_hdr *)(get_pe_header(buf) + 1);
+}
+
+static inline struct section_header *get_sections(uint8_t *buf)
+{
+	pe_opt_hdr *hdr = get_pe_opt_header(buf);
+	uint32_t n_data_dirs = get_unaligned_le32(&hdr->data_dirs);
+	uint8_t *sections = (uint8_t *)(hdr + 1) + n_data_dirs*sizeof(struct data_dirent);
+	return  (struct section_header *)sections;
+}
+
+static inline struct data_directory *get_data_dirs(uint8_t *buf)
+{
+	pe_opt_hdr *hdr = get_pe_opt_header(buf);
+	return (struct data_directory *)(hdr + 1);
+}
+
+#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
+#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | IMAGE_SCN_ALIGN_4096BYTES)
+#define SCN_RX (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
+#define SCN_RO (IMAGE_SCN_MEM_READ | IMAGE_SCN_ALIGN_4096BYTES)
+#else
+/* With memory protection disabled all sections are RWX */
+#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | \
+		IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
+#define SCN_RX SCN_RW
+#define SCN_RO SCN_RW
+#endif
+
 static unsigned long efi32_stub_entry;
 static unsigned long efi64_stub_entry;
 static unsigned long efi_pe_entry;
@@ -70,7 +122,7 @@ static unsigned long _end;
 
 /*----------------------------------------------------------------------*/
 
-static const u32 crctab32[] = {
+static const uint32_t crctab32[] = {
 	0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
 	0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
 	0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
@@ -125,12 +177,12 @@ static const u32 crctab32[] = {
 	0x2d02ef8d
 };
 
-static u32 partial_crc32_one(u8 c, u32 crc)
+static uint32_t partial_crc32_one(uint8_t c, uint32_t crc)
 {
 	return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
 }
 
-static u32 partial_crc32(const u8 *s, int len, u32 crc)
+static uint32_t partial_crc32(const uint8_t *s, int len, uint32_t crc)
 {
 	while (len--)
 		crc = partial_crc32_one(*s++, crc);
@@ -152,57 +204,106 @@ static void usage(void)
 	die("Usage: build setup system zoffset.h image");
 }
 
+static void *map_file(const char *path, size_t *psize)
+{
+	struct stat statbuf;
+	size_t size;
+	void *addr;
+	int fd;
+
+	fd = open(path, O_RDONLY);
+	if (fd < 0)
+		die("Unable to open `%s': %m", path);
+	if (fstat(fd, &statbuf))
+		die("Unable to stat `%s': %m", path);
+
+	size = statbuf.st_size;
+	/*
+	 * Map one byte more, to allow adding null-terminator
+	 * for text files.
+	 */
+	addr = mmap(NULL, size + 1, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+	if (addr == MAP_FAILED)
+		die("Unable to mmap '%s': %m", path);
+
+	close(fd);
+
+	*psize = size;
+	return addr;
+}
+
+static void unmap_file(void *addr, size_t size)
+{
+	munmap(addr, size + 1);
+}
+
+static void *map_output_file(const char *path, size_t size)
+{
+	void *addr;
+	int fd;
+
+	fd = open(path, O_RDWR | O_CREAT, 0660);
+	if (fd < 0)
+		die("Unable to create `%s': %m", path);
+
+	if (ftruncate(fd, size))
+		die("Unable to resize `%s': %m", path);
+
+	addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	if (addr == MAP_FAILED)
+		die("Unable to mmap '%s': %m", path);
+
+	return addr;
+}
+
 #ifdef CONFIG_EFI_STUB
 
-static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 size, u32 datasz, u32 offset)
+static void update_pecoff_section_header_fields(char *section_name, uint32_t vma,
+						uint32_t size, uint32_t datasz,
+						uint32_t offset)
 {
 	unsigned int pe_header;
 	unsigned short num_sections;
-	u8 *section;
+	struct section_header *section;
 
-	pe_header = get_unaligned_le32(&buf[0x3c]);
-	num_sections = get_unaligned_le16(&buf[pe_header + 6]);
-
-#ifdef CONFIG_X86_32
-	section = &buf[pe_header + 0xa8];
-#else
-	section = &buf[pe_header + 0xb8];
-#endif
+	struct pe_hdr *hdr = get_pe_header(buf);
+	num_sections = get_unaligned_le16(&hdr->sections);
+	section = get_sections(buf);
 
 	while (num_sections > 0) {
-		if (strncmp((char*)section, section_name, 8) == 0) {
+		if (strncmp(section->name, section_name, 8) == 0) {
 			/* section header size field */
-			put_unaligned_le32(size, section + 0x8);
+			put_unaligned_le32(size, &section->virtual_size);
 
 			/* section header vma field */
-			put_unaligned_le32(vma, section + 0xc);
+			put_unaligned_le32(vma, &section->virtual_address);
 
 			/* section header 'size of initialised data' field */
-			put_unaligned_le32(datasz, section + 0x10);
+			put_unaligned_le32(datasz, &section->raw_data_size);
 
 			/* section header 'file offset' field */
-			put_unaligned_le32(offset, section + 0x14);
+			put_unaligned_le32(offset, &section->data_addr);
 
 			break;
 		}
-		section += 0x28;
+		section++;
 		num_sections--;
 	}
 }
 
-static void update_pecoff_section_header(char *section_name, u32 offset, u32 size)
+static void update_pecoff_section_header(char *section_name, uint32_t offset, uint32_t size)
 {
 	update_pecoff_section_header_fields(section_name, offset, size, size, offset);
 }
 
 static void update_pecoff_setup_and_reloc(unsigned int size)
 {
-	u32 setup_offset = 0x200;
-	u32 reloc_offset = size - PECOFF_RELOC_RESERVE - PECOFF_COMPAT_RESERVE;
+	uint32_t setup_offset = SECTOR_SIZE;
+	uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE - PECOFF_COMPAT_RESERVE;
 #ifdef CONFIG_EFI_MIXED
-	u32 compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
+	uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
 #endif
-	u32 setup_size = reloc_offset - setup_offset;
+	uint32_t setup_size = reloc_offset - setup_offset;
 
 	update_pecoff_section_header(".setup", setup_offset, setup_size);
 	update_pecoff_section_header(".reloc", reloc_offset, PECOFF_RELOC_RESERVE);
@@ -211,8 +312,8 @@ static void update_pecoff_setup_and_reloc(unsigned int size)
 	 * Modify .reloc section contents with a single entry. The
 	 * relocation is applied to offset 10 of the relocation section.
 	 */
-	put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
-	put_unaligned_le32(10, &buf[reloc_offset + 4]);
+	put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, &buf[reloc_offset]);
+	put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset + 4]);
 
 #ifdef CONFIG_EFI_MIXED
 	update_pecoff_section_header(".compat", compat_offset, PECOFF_COMPAT_RESERVE);
@@ -224,19 +325,17 @@ static void update_pecoff_setup_and_reloc(unsigned int size)
 	 */
 	buf[compat_offset] = 0x1;
 	buf[compat_offset + 1] = 0x8;
-	put_unaligned_le16(0x14c, &buf[compat_offset + 2]);
+	put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset + 2]);
 	put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset + 4]);
 #endif
 }
 
-static void update_pecoff_text(unsigned int text_start, unsigned int file_sz,
+static unsigned int update_pecoff_sections(unsigned int text_start, unsigned int text_sz,
 			       unsigned int init_sz)
 {
-	unsigned int pe_header;
-	unsigned int text_sz = file_sz - text_start;
+	unsigned int file_sz = text_start + text_sz;
 	unsigned int bss_sz = init_sz - file_sz;
-
-	pe_header = get_unaligned_le32(&buf[0x3c]);
+	pe_opt_hdr *hdr = get_pe_opt_header(buf);
 
 	/*
 	 * The PE/COFF loader may load the image at an address which is
@@ -254,18 +353,20 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz,
 	 * Size of code: Subtract the size of the first sector (512 bytes)
 	 * which includes the header.
 	 */
-	put_unaligned_le32(file_sz - 512 + bss_sz, &buf[pe_header + 0x1c]);
+	put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz, &hdr->text_size);
 
 	/* Size of image */
-	put_unaligned_le32(init_sz, &buf[pe_header + 0x50]);
+	put_unaligned_le32(init_sz, &hdr->image_size);
 
 	/*
 	 * Address of entry point for PE/COFF executable
 	 */
-	put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]);
+	put_unaligned_le32(text_start + efi_pe_entry, &hdr->entry_point);
 
 	update_pecoff_section_header_fields(".text", text_start, text_sz + bss_sz,
 					    text_sz, text_start);
+
+	return text_start + file_sz;
 }
 
 static int reserve_pecoff_reloc_section(int c)
@@ -275,7 +376,7 @@ static int reserve_pecoff_reloc_section(int c)
 	return PECOFF_RELOC_RESERVE;
 }
 
-static void efi_stub_defaults(void)
+static void efi_stub_update_defaults(void)
 {
 	/* Defaults for old kernel */
 #ifdef CONFIG_X86_32
@@ -298,7 +399,7 @@ static void efi_stub_entry_update(void)
 
 #ifdef CONFIG_EFI_MIXED
 	if (efi32_stub_entry != addr)
-		die("32-bit and 64-bit EFI entry points do not match\n");
+		die("32-bit and 64-bit EFI entry points do not match");
 #endif
 #endif
 	put_unaligned_le32(addr, &buf[0x264]);
@@ -310,7 +411,7 @@ static inline void update_pecoff_setup_and_reloc(unsigned int size) {}
 static inline void update_pecoff_text(unsigned int text_start,
 				      unsigned int file_sz,
 				      unsigned int init_sz) {}
-static inline void efi_stub_defaults(void) {}
+static inline void efi_stub_update_defaults(void) {}
 static inline void efi_stub_entry_update(void) {}
 
 static inline int reserve_pecoff_reloc_section(int c)
@@ -338,20 +439,15 @@ static int reserve_pecoff_compat_section(int c)
 
 static void parse_zoffset(char *fname)
 {
-	FILE *file;
-	char *p;
-	int c;
+	size_t size;
+	char *data, *p;
 
-	file = fopen(fname, "r");
-	if (!file)
-		die("Unable to open `%s': %m", fname);
-	c = fread(buf, 1, sizeof(buf) - 1, file);
-	if (ferror(file))
-		die("read-error on `zoffset.h'");
-	fclose(file);
-	buf[c] = 0;
+	data = map_file(fname, &size);
 
-	p = (char *)buf;
+	/* We can do that, since we mapped one byte more */
+	data[size] = 0;
+
+	p = (char *)data;
 
 	while (p && *p) {
 		PARSE_ZOFS(p, efi32_stub_entry);
@@ -367,82 +463,99 @@ static void parse_zoffset(char *fname)
 		while (p && (*p == '\r' || *p == '\n'))
 			p++;
 	}
+
+	unmap_file(data, size);
 }
 
-int main(int argc, char ** argv)
+static unsigned int read_setup(char *path)
 {
-	unsigned int i, sz, setup_sectors, init_sz;
-	int c;
-	u32 sys_size;
-	struct stat sb;
-	FILE *file, *dest;
-	int fd;
-	void *kernel;
-	u32 crc = 0xffffffffUL;
-
-	efi_stub_defaults();
-
-	if (argc != 5)
-		usage();
-	parse_zoffset(argv[3]);
-
-	dest = fopen(argv[4], "w");
-	if (!dest)
-		die("Unable to write `%s': %m", argv[4]);
+	FILE *file;
+	unsigned int setup_size, file_size;
 
 	/* Copy the setup code */
-	file = fopen(argv[1], "r");
+	file = fopen(path, "r");
 	if (!file)
-		die("Unable to open `%s': %m", argv[1]);
-	c = fread(buf, 1, sizeof(buf), file);
+		die("Unable to open `%s': %m", path);
+
+	file_size = fread(buf, 1, sizeof(buf), file);
 	if (ferror(file))
 		die("read-error on `setup'");
-	if (c < 1024)
+
+	if (file_size < 2 * SECTOR_SIZE)
 		die("The setup must be at least 1024 bytes");
-	if (get_unaligned_le16(&buf[510]) != 0xAA55)
+
+	if (get_unaligned_le16(&buf[SECTOR_SIZE - 2]) != 0xAA55)
 		die("Boot block hasn't got boot flag (0xAA55)");
+
 	fclose(file);
 
-	c += reserve_pecoff_compat_section(c);
-	c += reserve_pecoff_reloc_section(c);
+	/* Reserve space for PE sections */
+	file_size += reserve_pecoff_compat_section(file_size);
+	file_size += reserve_pecoff_reloc_section(file_size);
 
 	/* Pad unused space with zeros */
-	setup_sectors = (c + 511) / 512;
-	if (setup_sectors < SETUP_SECT_MIN)
-		setup_sectors = SETUP_SECT_MIN;
-	i = setup_sectors*512;
-	memset(buf+c, 0, i-c);
 
-	update_pecoff_setup_and_reloc(i);
+	setup_size = round_up(file_size, SECTOR_SIZE);
+
+	if (setup_size < SETUP_SECT_MIN * SECTOR_SIZE)
+		setup_size = SETUP_SECT_MIN * SECTOR_SIZE;
+
+	/*
+	 * Global buffer is already initialised
+	 * to 0, but just in case, zero out padding.
+	 */
+
+	memset(buf + file_size, 0, setup_size - file_size);
+
+	return setup_size;
+}
+
+int main(int argc, char **argv)
+{
+	size_t kern_file_size;
+	unsigned int setup_size;
+	unsigned int setup_sectors;
+	unsigned int init_size;
+	unsigned int total_size;
+	unsigned int kern_size;
+	void *kernel;
+	uint32_t crc = 0xffffffffUL;
+	uint8_t *output;
+
+	if (argc != 5)
+		usage();
+
+	efi_stub_update_defaults();
+	parse_zoffset(argv[3]);
+
+	setup_size = read_setup(argv[1]);
+
+	setup_sectors = setup_size/SECTOR_SIZE;
 
 	/* Set the default root device */
 	put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
 
-	/* Open and stat the kernel file */
-	fd = open(argv[2], O_RDONLY);
-	if (fd < 0)
-		die("Unable to open `%s': %m", argv[2]);
-	if (fstat(fd, &sb))
-		die("Unable to stat `%s': %m", argv[2]);
-	sz = sb.st_size;
-	kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
-	if (kernel == MAP_FAILED)
-		die("Unable to mmap '%s': %m", argv[2]);
-	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
-	sys_size = (sz + 15 + 4) / 16;
+	/* Map kernel file to memory */
+	kernel = map_file(argv[2], &kern_file_size);
+
 #ifdef CONFIG_EFI_STUB
-	/*
-	 * COFF requires minimum 32-byte alignment of sections, and
-	 * adding a signature is problematic without that alignment.
-	 */
-	sys_size = (sys_size + 1) & ~1;
+	/* PE specification require 512-byte minimum section file alignment */
+	kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
+	update_pecoff_setup_and_reloc(setup_size);
+#else
+	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
+	kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
 #endif
 
 	/* Patch the setup code with the appropriate size parameters */
-	buf[0x1f1] = setup_sectors-1;
-	put_unaligned_le32(sys_size, &buf[0x1f4]);
+	buf[0x1f1] = setup_sectors - 1;
+	put_unaligned_le32(kern_size/PARAGRAPH_SIZE, &buf[0x1f4]);
+
+	/* Update kernel_info offset. */
+	put_unaligned_le32(kernel_info, &buf[0x268]);
+
+	init_size = get_unaligned_le32(&buf[0x260]);
 
-	init_sz = get_unaligned_le32(&buf[0x260]);
 #ifdef CONFIG_EFI_STUB
 	/*
 	 * The decompression buffer will start at ImageBase. When relocating
@@ -458,45 +571,35 @@ int main(int argc, char ** argv)
 	 * For future-proofing, increase init_sz if necessary.
 	 */
 
-	if (init_sz - _end < i + _ehead) {
-		init_sz = (i + _ehead + _end + 4095) & ~4095;
-		put_unaligned_le32(init_sz, &buf[0x260]);
+	if (init_size - _end < setup_size + _ehead) {
+		init_size = round_up(setup_size + _ehead + _end, SECTION_ALIGNMENT);
+		put_unaligned_le32(init_size, &buf[0x260]);
 	}
-#endif
-	update_pecoff_text(setup_sectors * 512, i + (sys_size * 16), init_sz);
 
-	efi_stub_entry_update();
-
-	/* Update kernel_info offset. */
-	put_unaligned_le32(kernel_info, &buf[0x268]);
+	total_size = update_pecoff_sections(setup_size, kern_size, init_size);
 
-	crc = partial_crc32(buf, i, crc);
-	if (fwrite(buf, 1, i, dest) != i)
-		die("Writing setup failed");
+	efi_stub_entry_update();
+#else
+	(void)init_size;
+	total_size = setup_size + kern_size;
+#endif
 
-	/* Copy the kernel code */
-	crc = partial_crc32(kernel, sz, crc);
-	if (fwrite(kernel, 1, sz, dest) != sz)
-		die("Writing kernel failed");
+	output = map_output_file(argv[4], total_size);
 
-	/* Add padding leaving 4 bytes for the checksum */
-	while (sz++ < (sys_size*16) - 4) {
-		crc = partial_crc32_one('\0', crc);
-		if (fwrite("\0", 1, 1, dest) != 1)
-			die("Writing padding failed");
-	}
+	memcpy(output, buf, setup_size);
+	memcpy(output + setup_size, kernel, kern_file_size);
+	memset(output + setup_size + kern_file_size, 0, kern_size - kern_file_size);
 
-	/* Write the CRC */
-	put_unaligned_le32(crc, buf);
-	if (fwrite(buf, 1, 4, dest) != 4)
-		die("Writing CRC failed");
+	/* Calculate and write kernel checksum. */
+	crc = partial_crc32(output, total_size - 4, crc);
+	put_unaligned_le32(crc, &output[total_size - 4]);
 
-	/* Catch any delayed write failures */
-	if (fclose(dest))
-		die("Writing image failed");
+	/* Catch any delayed write failures. */
+	if (munmap(output, total_size) < 0)
+		die("Writing kernel failed");
 
-	close(fd);
+	unmap_file(kernel, kern_file_size);
 
-	/* Everything is OK */
+	/* Everything is OK. */
 	return 0;
 }
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (18 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 19/26] x86/build: Cleanup tools/build.c Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 15:17   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes Evgeniy Baskov
                   ` (6 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Currently kernel image is not fully compliant PE image, so it may
fail to boot with stricter implementations of UEFI PE loaders.

Set minimal alignments and sizes specified by PE documentation [1]
referenced by UEFI specification [2]. Align PE header to 8 bytes.

Generate PE sections dynamically. This simplifies code, since with
current implementation all of the sections needs to be defined in
header.S, where most section header fields do not hold valid values,
except for their names. Before the change, it also held flags,
but now flags depend on kernel configuration and it is simpler
to set them from build.c too.

Setup sections protection. Since we cannot fit every needed section,
set a part of protection flags dynamically during initialization.
This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.

[1] https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
[2] https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf

Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/boot/Makefile                  |   2 +-
 arch/x86/boot/header.S                  |  96 +--------
 arch/x86/boot/tools/build.c             | 270 +++++++++++++-----------
 drivers/firmware/efi/libstub/x86-stub.c |   7 +-
 4 files changed, 161 insertions(+), 214 deletions(-)

diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index 9e38ffaadb5d..bed78c82238e 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -91,7 +91,7 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
 
 SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))
 
-sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [a-zA-Z] \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|efi32_pe_entry\|input_data\|kernel_info\|_end\|_ehead\|_text\|z_.*\)$$/\#define ZO_\2 0x\1/p'
+sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [a-zA-Z] \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|efi32_pe_entry\|input_data\|kernel_info\|_end\|_ehead\|_text\|_rodata\|z_.*\)$$/\#define ZO_\2 0x\1/p'
 
 quiet_cmd_zoffset = ZOFFSET $@
       cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
index 9fec80bc504b..07e31ddb074f 100644
--- a/arch/x86/boot/header.S
+++ b/arch/x86/boot/header.S
@@ -94,6 +94,7 @@ bugger_off_msg:
 	.set	bugger_off_msg_size, . - bugger_off_msg
 
 #ifdef CONFIG_EFI_STUB
+	.align 8
 pe_header:
 	.long	PE_MAGIC
 
@@ -107,7 +108,7 @@ coff_header:
 	.set	pe_opt_magic, PE_OPT_MAGIC_PE32PLUS
 	.word	IMAGE_FILE_MACHINE_AMD64
 #endif
-	.word	section_count			# nr_sections
+	.word	0				# nr_sections
 	.long	0 				# TimeDateStamp
 	.long	0				# PointerToSymbolTable
 	.long	1				# NumberOfSymbols
@@ -131,7 +132,7 @@ optional_header:
 	# Filled in by build.c
 	.long	0x0000				# AddressOfEntryPoint
 
-	.long	0x0200				# BaseOfCode
+	.long	0x1000				# BaseOfCode
 #ifdef CONFIG_X86_32
 	.long	0				# data
 #endif
@@ -144,8 +145,8 @@ extra_header_fields:
 #else
 	.quad	image_base			# ImageBase
 #endif
-	.long	0x20				# SectionAlignment
-	.long	0x20				# FileAlignment
+	.long	0x1000				# SectionAlignment
+	.long	0x200				# FileAlignment
 	.word	0				# MajorOperatingSystemVersion
 	.word	0				# MinorOperatingSystemVersion
 	.word	LINUX_EFISTUB_MAJOR_VERSION	# MajorImageVersion
@@ -188,91 +189,14 @@ extra_header_fields:
 	.quad	0				# CertificationTable
 	.quad	0				# BaseRelocationTable
 
-	# Section table
-section_table:
-	#
-	# The offset & size fields are filled in by build.c.
-	#
-	.ascii	".setup"
-	.byte	0
-	.byte	0
-	.long	0
-	.long	0x0				# startup_{32,64}
-	.long	0				# Size of initialized data
-						# on disk
-	.long	0x0				# startup_{32,64}
-	.long	0				# PointerToRelocations
-	.long	0				# PointerToLineNumbers
-	.word	0				# NumberOfRelocations
-	.word	0				# NumberOfLineNumbers
-	.long	IMAGE_SCN_CNT_CODE		| \
-		IMAGE_SCN_MEM_READ		| \
-		IMAGE_SCN_MEM_EXECUTE		| \
-		IMAGE_SCN_ALIGN_16BYTES		# Characteristics
-
-	#
-	# The EFI application loader requires a relocation section
-	# because EFI applications must be relocatable. The .reloc
-	# offset & size fields are filled in by build.c.
 	#
-	.ascii	".reloc"
-	.byte	0
-	.byte	0
-	.long	0
-	.long	0
-	.long	0				# SizeOfRawData
-	.long	0				# PointerToRawData
-	.long	0				# PointerToRelocations
-	.long	0				# PointerToLineNumbers
-	.word	0				# NumberOfRelocations
-	.word	0				# NumberOfLineNumbers
-	.long	IMAGE_SCN_CNT_INITIALIZED_DATA	| \
-		IMAGE_SCN_MEM_READ		| \
-		IMAGE_SCN_MEM_DISCARDABLE	| \
-		IMAGE_SCN_ALIGN_1BYTES		# Characteristics
-
-#ifdef CONFIG_EFI_MIXED
-	#
-	# The offset & size fields are filled in by build.c.
+	# Section table
+	# It is generated by build.c and here we just need
+	# to reserve some space for sections
 	#
-	.asciz	".compat"
-	.long	0
-	.long	0x0
-	.long	0				# Size of initialized data
-						# on disk
-	.long	0x0
-	.long	0				# PointerToRelocations
-	.long	0				# PointerToLineNumbers
-	.word	0				# NumberOfRelocations
-	.word	0				# NumberOfLineNumbers
-	.long	IMAGE_SCN_CNT_INITIALIZED_DATA	| \
-		IMAGE_SCN_MEM_READ		| \
-		IMAGE_SCN_MEM_DISCARDABLE	| \
-		IMAGE_SCN_ALIGN_1BYTES		# Characteristics
-#endif
+section_table:
+	.fill 40*5, 1, 0
 
-	#
-	# The offset & size fields are filled in by build.c.
-	#
-	.ascii	".text"
-	.byte	0
-	.byte	0
-	.byte	0
-	.long	0
-	.long	0x0				# startup_{32,64}
-	.long	0				# Size of initialized data
-						# on disk
-	.long	0x0				# startup_{32,64}
-	.long	0				# PointerToRelocations
-	.long	0				# PointerToLineNumbers
-	.word	0				# NumberOfRelocations
-	.word	0				# NumberOfLineNumbers
-	.long	IMAGE_SCN_CNT_CODE		| \
-		IMAGE_SCN_MEM_READ		| \
-		IMAGE_SCN_MEM_EXECUTE		| \
-		IMAGE_SCN_ALIGN_16BYTES		# Characteristics
-
-	.set	section_count, (. - section_table) / 40
 #endif /* CONFIG_EFI_STUB */
 
 	# Kernel attributes; used by setup.  This is part 1 of the
diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
index fbc5315af032..ac6159b76a13 100644
--- a/arch/x86/boot/tools/build.c
+++ b/arch/x86/boot/tools/build.c
@@ -61,8 +61,10 @@ uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
 
 #ifdef CONFIG_EFI_MIXED
 #define PECOFF_COMPAT_RESERVE 0x20
+#define COMPAT_SECTION_SIZE 0x8
 #else
 #define PECOFF_COMPAT_RESERVE 0x0
+#define COMPAT_SECTION_SIZE 0x0
 #endif
 
 #define RELOC_SECTION_SIZE 10
@@ -117,6 +119,7 @@ static unsigned long efi_pe_entry;
 static unsigned long efi32_pe_entry;
 static unsigned long kernel_info;
 static unsigned long startup_64;
+static unsigned long _rodata;
 static unsigned long _ehead;
 static unsigned long _end;
 
@@ -258,122 +261,177 @@ static void *map_output_file(const char *path, size_t size)
 
 #ifdef CONFIG_EFI_STUB
 
-static void update_pecoff_section_header_fields(char *section_name, uint32_t vma,
-						uint32_t size, uint32_t datasz,
-						uint32_t offset)
+static unsigned int reloc_offset;
+static unsigned int compat_offset;
+
+#define MAX_SECTIONS 5
+
+static void emit_pecoff_section(const char *section_name, unsigned int size,
+				unsigned int bss, unsigned int *file_offset,
+				unsigned int *mem_offset, uint32_t flags)
 {
-	unsigned int pe_header;
+	unsigned int section_memsz, section_filesz;
+	unsigned int name_len;
 	unsigned short num_sections;
+	struct pe_hdr *hdr = get_pe_header(buf);
 	struct section_header *section;
 
-	struct pe_hdr *hdr = get_pe_header(buf);
 	num_sections = get_unaligned_le16(&hdr->sections);
-	section = get_sections(buf);
+	if (num_sections >= MAX_SECTIONS)
+		die("Not enough space to generate all sections");
 
-	while (num_sections > 0) {
-		if (strncmp(section->name, section_name, 8) == 0) {
-			/* section header size field */
-			put_unaligned_le32(size, &section->virtual_size);
+	section = get_sections(buf) + num_sections;
 
-			/* section header vma field */
-			put_unaligned_le32(vma, &section->virtual_address);
+	if ((size & (FILE_ALIGNMENT - 1)) || (bss & (FILE_ALIGNMENT - 1)))
+		die("Section '%s' is improperly aligned", section_name);
 
-			/* section header 'size of initialised data' field */
-			put_unaligned_le32(datasz, &section->raw_data_size);
+	section_memsz = round_up(size + bss, SECTION_ALIGNMENT);
+	section_filesz = round_up(size, FILE_ALIGNMENT);
 
-			/* section header 'file offset' field */
-			put_unaligned_le32(offset, &section->data_addr);
+	/* Zero out all section fields */
+	memset(section, 0, sizeof(*section));
 
-			break;
-		}
-		section++;
-		num_sections--;
-	}
-}
+	name_len = strlen(section_name);
+	if (name_len > sizeof(section->name))
+		name_len = sizeof(section_name);
 
-static void update_pecoff_section_header(char *section_name, uint32_t offset, uint32_t size)
-{
-	update_pecoff_section_header_fields(section_name, offset, size, size, offset);
+	/* Section header size field */
+	memcpy(section->name, section_name, name_len);
+
+	put_unaligned_le32(section_memsz, &section->virtual_size);
+	put_unaligned_le32(*mem_offset, &section->virtual_address);
+	put_unaligned_le32(section_filesz, &section->raw_data_size);
+	put_unaligned_le32(*file_offset, &section->data_addr);
+	put_unaligned_le32(flags, &section->flags);
+
+	put_unaligned_le16(num_sections + 1, &hdr->sections);
+
+	*mem_offset += section_memsz;
+	*file_offset += section_filesz;
 }
 
-static void update_pecoff_setup_and_reloc(unsigned int size)
+#define BASE_RVA 0x1000
+
+static unsigned int text_rva;
+
+static unsigned int update_pecoff_sections(unsigned int setup_size,
+					   unsigned int file_size,
+					   unsigned int virt_size,
+					   unsigned int text_size)
 {
-	uint32_t setup_offset = SECTOR_SIZE;
-	uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE - PECOFF_COMPAT_RESERVE;
-#ifdef CONFIG_EFI_MIXED
-	uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
-#endif
-	uint32_t setup_size = reloc_offset - setup_offset;
+	/* First section starts at 512 byes, after PE header */
+	unsigned int mem_offset = BASE_RVA, file_offset = SECTOR_SIZE;
+	unsigned int compat_size, reloc_size;
+	unsigned int bss_size, text_rva_diff, reloc_rva;
+	pe_opt_hdr  *opt_hdr = get_pe_opt_header(buf);
+	struct pe_hdr *hdr = get_pe_header(buf);
+	struct data_dirent *base_reloc;
+
+	if (get_unaligned_le32(&hdr->sections))
+		die("Some sections present in PE file");
 
-	update_pecoff_section_header(".setup", setup_offset, setup_size);
-	update_pecoff_section_header(".reloc", reloc_offset, PECOFF_RELOC_RESERVE);
+	reloc_size = round_up(RELOC_SECTION_SIZE, FILE_ALIGNMENT);
+	compat_size = round_up(COMPAT_SECTION_SIZE, FILE_ALIGNMENT);
+	virt_size = round_up(virt_size, SECTION_ALIGNMENT);
 
 	/*
-	 * Modify .reloc section contents with a single entry. The
-	 * relocation is applied to offset 10 of the relocation section.
+	 * Update sections offsets.
+	 * NOTE: Order is important
 	 */
-	put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, &buf[reloc_offset]);
-	put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset + 4]);
 
+	bss_size = virt_size - file_size;
+
+	emit_pecoff_section(".setup", setup_size - SECTOR_SIZE, 0,
+			    &file_offset, &mem_offset, SCN_RO |
+			    IMAGE_SCN_CNT_INITIALIZED_DATA);
+
+	text_rva_diff = mem_offset - file_offset;
+	text_rva = mem_offset;
+	emit_pecoff_section(".text", text_size, 0,
+			    &file_offset, &mem_offset, SCN_RX |
+			    IMAGE_SCN_CNT_CODE);
+
+	/* Check that kernel sections mapping is contiguous */
+	if (text_rva_diff != mem_offset - file_offset)
+		die("Kernel sections mapping is wrong: %#x != %#x",
+		    mem_offset - file_offset, text_rva_diff);
+
+	emit_pecoff_section(".data", file_size - text_size, bss_size,
+			    &file_offset, &mem_offset, SCN_RW |
+			    IMAGE_SCN_CNT_INITIALIZED_DATA);
+
+	reloc_offset = file_offset;
+	reloc_rva = mem_offset;
+	emit_pecoff_section(".reloc", reloc_size, 0,
+			    &file_offset, &mem_offset, SCN_RW |
+			    IMAGE_SCN_CNT_INITIALIZED_DATA |
+			    IMAGE_SCN_MEM_DISCARDABLE);
+
+	compat_offset = file_offset;
 #ifdef CONFIG_EFI_MIXED
-	update_pecoff_section_header(".compat", compat_offset, PECOFF_COMPAT_RESERVE);
+	emit_pecoff_section(".comat", compat_size, 0,
+			    &file_offset, &mem_offset, SCN_RW |
+			    IMAGE_SCN_CNT_INITIALIZED_DATA |
+			    IMAGE_SCN_MEM_DISCARDABLE);
+#endif
 
+	if (file_size + setup_size + reloc_size + compat_size != file_offset)
+		die("file_size(%#x) != filesz(%#x)",
+		    file_size + setup_size + reloc_size + compat_size, file_offset);
+
+	/* Size of code. */
+	put_unaligned_le32(round_up(text_size, SECTION_ALIGNMENT), &opt_hdr->text_size);
 	/*
-	 * Put the IA-32 machine type (0x14c) and the associated entry point
-	 * address in the .compat section, so loaders can figure out which other
-	 * execution modes this image supports.
+	 * Size of data.
+	 * Exclude text size and first sector, which contains PE header.
 	 */
-	buf[compat_offset] = 0x1;
-	buf[compat_offset + 1] = 0x8;
-	put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset + 2]);
-	put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset + 4]);
-#endif
-}
+	put_unaligned_le32(mem_offset - round_up(text_size, SECTION_ALIGNMENT),
+			   &opt_hdr->data_size);
 
-static unsigned int update_pecoff_sections(unsigned int text_start, unsigned int text_sz,
-			       unsigned int init_sz)
-{
-	unsigned int file_sz = text_start + text_sz;
-	unsigned int bss_sz = init_sz - file_sz;
-	pe_opt_hdr *hdr = get_pe_opt_header(buf);
+	/* Size of image. */
+	put_unaligned_le32(mem_offset, &opt_hdr->image_size);
 
 	/*
-	 * The PE/COFF loader may load the image at an address which is
-	 * misaligned with respect to the kernel_alignment field in the setup
-	 * header.
-	 *
-	 * In order to avoid relocating the kernel to correct the misalignment,
-	 * add slack to allow the buffer to be aligned within the declared size
-	 * of the image.
+	 * Address of entry point for PE/COFF executable
 	 */
-	bss_sz	+= CONFIG_PHYSICAL_ALIGN;
-	init_sz	+= CONFIG_PHYSICAL_ALIGN;
+	put_unaligned_le32(text_rva + efi_pe_entry, &opt_hdr->entry_point);
 
 	/*
-	 * Size of code: Subtract the size of the first sector (512 bytes)
-	 * which includes the header.
+	 * BaseOfCode for PE/COFF executable
 	 */
-	put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz, &hdr->text_size);
-
-	/* Size of image */
-	put_unaligned_le32(init_sz, &hdr->image_size);
+	put_unaligned_le32(text_rva, &opt_hdr->code_base);
 
 	/*
-	 * Address of entry point for PE/COFF executable
+	 * Since we have generated .reloc section, we need to
+	 * fill-in Reloc directory
 	 */
-	put_unaligned_le32(text_start + efi_pe_entry, &hdr->entry_point);
+	base_reloc = &get_data_dirs(buf)->base_relocations;
+	put_unaligned_le32(reloc_rva, &base_reloc->virtual_address);
+	put_unaligned_le32(RELOC_SECTION_SIZE, &base_reloc->size);
 
-	update_pecoff_section_header_fields(".text", text_start, text_sz + bss_sz,
-					    text_sz, text_start);
-
-	return text_start + file_sz;
+	return file_offset;
 }
 
-static int reserve_pecoff_reloc_section(int c)
+static void generate_pecoff_section_data(uint8_t *output)
 {
-	/* Reserve 0x20 bytes for .reloc section */
-	memset(buf+c, 0, PECOFF_RELOC_RESERVE);
-	return PECOFF_RELOC_RESERVE;
+	/*
+	 * Modify .reloc section contents with a two entries. The
+	 * relocation is applied to offset 10 of the relocation section.
+	 */
+	put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, &output[reloc_offset]);
+	put_unaligned_le32(RELOC_SECTION_SIZE, &output[reloc_offset + 4]);
+
+#ifdef CONFIG_EFI_MIXED
+	/*
+	 * Put the IA-32 machine type (0x14c) and the associated entry point
+	 * address in the .compat section, so loaders can figure out which other
+	 * execution modes this image supports.
+	 */
+	output[compat_offset] = 0x1;
+	output[compat_offset + 1] = 0x8;
+	put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &output[compat_offset + 2]);
+	put_unaligned_le32(efi32_pe_entry + text_rva, &output[compat_offset + 4]);
+#endif
 }
 
 static void efi_stub_update_defaults(void)
@@ -407,26 +465,10 @@ static void efi_stub_entry_update(void)
 
 #else
 
-static inline void update_pecoff_setup_and_reloc(unsigned int size) {}
-static inline void update_pecoff_text(unsigned int text_start,
-				      unsigned int file_sz,
-				      unsigned int init_sz) {}
-static inline void efi_stub_update_defaults(void) {}
-static inline void efi_stub_entry_update(void) {}
+static void efi_stub_update_defaults(void) {}
 
-static inline int reserve_pecoff_reloc_section(int c)
-{
-	return 0;
-}
 #endif /* CONFIG_EFI_STUB */
 
-static int reserve_pecoff_compat_section(int c)
-{
-	/* Reserve 0x20 bytes for .compat section */
-	memset(buf+c, 0, PECOFF_COMPAT_RESERVE);
-	return PECOFF_COMPAT_RESERVE;
-}
-
 /*
  * Parse zoffset.h and find the entry points. We could just #include zoffset.h
  * but that would mean tools/build would have to be rebuilt every time. It's
@@ -456,6 +498,7 @@ static void parse_zoffset(char *fname)
 		PARSE_ZOFS(p, efi32_pe_entry);
 		PARSE_ZOFS(p, kernel_info);
 		PARSE_ZOFS(p, startup_64);
+		PARSE_ZOFS(p, _rodata);
 		PARSE_ZOFS(p, _ehead);
 		PARSE_ZOFS(p, _end);
 
@@ -489,10 +532,6 @@ static unsigned int read_setup(char *path)
 
 	fclose(file);
 
-	/* Reserve space for PE sections */
-	file_size += reserve_pecoff_compat_section(file_size);
-	file_size += reserve_pecoff_reloc_section(file_size);
-
 	/* Pad unused space with zeros */
 
 	setup_size = round_up(file_size, SECTOR_SIZE);
@@ -515,7 +554,6 @@ int main(int argc, char **argv)
 	size_t kern_file_size;
 	unsigned int setup_size;
 	unsigned int setup_sectors;
-	unsigned int init_size;
 	unsigned int total_size;
 	unsigned int kern_size;
 	void *kernel;
@@ -540,8 +578,7 @@ int main(int argc, char **argv)
 
 #ifdef CONFIG_EFI_STUB
 	/* PE specification require 512-byte minimum section file alignment */
-	kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
-	update_pecoff_setup_and_reloc(setup_size);
+	kern_size = round_up(kern_file_size + 4, FILE_ALIGNMENT);
 #else
 	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
 	kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
@@ -554,33 +591,12 @@ int main(int argc, char **argv)
 	/* Update kernel_info offset. */
 	put_unaligned_le32(kernel_info, &buf[0x268]);
 
-	init_size = get_unaligned_le32(&buf[0x260]);
-
 #ifdef CONFIG_EFI_STUB
-	/*
-	 * The decompression buffer will start at ImageBase. When relocating
-	 * the compressed kernel to its end, we must ensure that the head
-	 * section does not get overwritten.  The head section occupies
-	 * [i, i + _ehead), and the destination is [init_sz - _end, init_sz).
-	 *
-	 * At present these should never overlap, because 'i' is at most 32k
-	 * because of SETUP_SECT_MAX, '_ehead' is less than 1k, and the
-	 * calculation of INIT_SIZE in boot/header.S ensures that
-	 * 'init_sz - _end' is at least 64k.
-	 *
-	 * For future-proofing, increase init_sz if necessary.
-	 */
-
-	if (init_size - _end < setup_size + _ehead) {
-		init_size = round_up(setup_size + _ehead + _end, SECTION_ALIGNMENT);
-		put_unaligned_le32(init_size, &buf[0x260]);
-	}
 
-	total_size = update_pecoff_sections(setup_size, kern_size, init_size);
+	total_size = update_pecoff_sections(setup_size, kern_size, _end, _rodata);
 
 	efi_stub_entry_update();
 #else
-	(void)init_size;
 	total_size = setup_size + kern_size;
 #endif
 
@@ -590,6 +606,10 @@ int main(int argc, char **argv)
 	memcpy(output + setup_size, kernel, kern_file_size);
 	memset(output + setup_size + kern_file_size, 0, kern_size - kern_file_size);
 
+#ifdef CONFIG_EFI_STUB
+	generate_pecoff_section_data(output);
+#endif
+
 	/* Calculate and write kernel checksum. */
 	crc = partial_crc32(output, total_size - 4, crc);
 	put_unaligned_le32(crc, &output[total_size - 4]);
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 1d1ab1911fd3..1f0a2e7075c3 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -389,8 +389,11 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
 
 	hdr = &boot_params->hdr;
 
-	/* Copy the setup header from the second sector to boot_params */
-	memcpy(&hdr->jump, image_base + 512,
+	/*
+	 * Copy the setup header from the second sector
+	 * (mapped to image_base + 0x1000) to boot_params
+	 */
+	memcpy(&hdr->jump, image_base + 0x1000,
 	       sizeof(struct setup_header) - offsetof(struct setup_header, jump));
 
 	/*
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (19 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 20/26] x86/build: Make generated PE more spec compliant Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 15:20   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 22/26] efi/libstub: Add memory attribute protocol definitions Evgeniy Baskov
                   ` (5 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Explicitly change sections memory attributes in efi_pe_entry in case
of incorrect EFI implementations and to reduce access rights to
compressed kernel blob. By default it is set executable due to
restriction in maximum number of sections that can fit before zero
page.

Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 drivers/firmware/efi/libstub/x86-stub.c | 54 +++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 1f0a2e7075c3..60697fcd8950 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -27,6 +27,12 @@ const efi_dxe_services_table_t *efi_dxe_table;
 u32 image_offset __section(".data");
 static efi_loaded_image_t *image __section(".data");
 
+extern char _head[], _ehead[];
+extern char _compressed[], _ecompressed[];
+extern char _text[], _etext[];
+extern char _rodata[], _erodata[];
+extern char _data[];
+
 static efi_status_t
 preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
 {
@@ -343,6 +349,52 @@ void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
 		asm("hlt");
 }
 
+
+/*
+ * Manually setup memory protection attributes for each ELF section
+ * since we cannot do it properly by using PE sections.
+ */
+static void setup_sections_memory_protection(unsigned long image_base)
+{
+#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
+	efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
+
+	if (!efi_dxe_table ||
+	    efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
+		efi_warn("Unable to locate EFI DXE services table\n");
+		efi_dxe_table = NULL;
+		return;
+	}
+
+	/* .setup [image_base, _head] */
+	efi_adjust_memory_range_protection(image_base,
+					   (unsigned long)_head - image_base,
+					   EFI_MEMORY_RO | EFI_MEMORY_XP);
+	/* .head.text [_head, _ehead] */
+	efi_adjust_memory_range_protection((unsigned long)_head,
+					   (unsigned long)_ehead - (unsigned long)_head,
+					   EFI_MEMORY_RO);
+	/* .rodata..compressed [_compressed, _ecompressed] */
+	efi_adjust_memory_range_protection((unsigned long)_compressed,
+					   (unsigned long)_ecompressed - (unsigned long)_compressed,
+					   EFI_MEMORY_RO | EFI_MEMORY_XP);
+	/* .text [_text, _etext] */
+	efi_adjust_memory_range_protection((unsigned long)_text,
+					   (unsigned long)_etext - (unsigned long)_text,
+					   EFI_MEMORY_RO);
+	/* .rodata [_rodata, _erodata] */
+	efi_adjust_memory_range_protection((unsigned long)_rodata,
+					   (unsigned long)_erodata - (unsigned long)_rodata,
+					   EFI_MEMORY_RO | EFI_MEMORY_XP);
+	/* .data, .bss [_data, _end] */
+	efi_adjust_memory_range_protection((unsigned long)_data,
+					   (unsigned long)_end - (unsigned long)_data,
+					   EFI_MEMORY_XP);
+#else
+	(void)image_base;
+#endif
+}
+
 void __noreturn efi_stub_entry(efi_handle_t handle,
 			       efi_system_table_t *sys_table_arg,
 			       struct boot_params *boot_params);
@@ -687,6 +739,8 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
 		efi_dxe_table = NULL;
 	}
 
+	setup_sections_memory_protection(bzimage_addr - image_offset);
+
 #ifdef CONFIG_CMDLINE_BOOL
 	status = efi_parse_options(CONFIG_CMDLINE);
 	if (status != EFI_SUCCESS) {
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 22/26] efi/libstub: Add memory attribute protocol definitions
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (20 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 23/26] efi/libstub: Use memory attribute protocol Evgeniy Baskov
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

EFI_MEMORY_ATTRIBUTE_PROTOCOL servers as a better alternative to
DXE services for setting memory attributes in EFI Boot Services
environment. This protocol is better since it is a part of UEFI
specification itself and not UEFI PI specification like DXE
services.

Add EFI_MEMORY_ATTRIBUTE_PROTOCOL definitions.
Support mixed mode properly for its calls.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 arch/x86/include/asm/efi.h             |  7 +++++++
 drivers/firmware/efi/libstub/efistub.h | 22 ++++++++++++++++++++++
 include/linux/efi.h                    |  1 +
 3 files changed, 30 insertions(+)

diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index a63154e049d7..cd19b9eca3f6 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -335,6 +335,13 @@ static inline u32 efi64_convert_status(efi_status_t status)
 #define __efi64_argmap_open_volume(prot, file) \
 	((prot), efi64_zero_upper(file))
 
+/* Memory Attribute Protocol */
+#define __efi64_argmap_set_memory_attributes(protocol, phys, size, flags) \
+	((protocol), __efi64_split(phys), __efi64_split(size), __efi64_split(flags))
+
+#define __efi64_argmap_clear_memory_attributes(protocol, phys, size, flags) \
+	((protocol), __efi64_split(phys), __efi64_split(size), __efi64_split(flags))
+
 /*
  * The macros below handle the plumbing for the argument mapping. To add a
  * mapping for a specific EFI method, simply define a macro
diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
index c55325f829e7..cd8a7b089b7d 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -43,6 +43,9 @@ extern const efi_system_table_t *efi_system_table;
 typedef union efi_dxe_services_table efi_dxe_services_table_t;
 extern const efi_dxe_services_table_t *efi_dxe_table;
 
+typedef union efi_memory_attribute_protocol efi_memory_attribute_protocol_t;
+extern efi_memory_attribute_protocol_t *efi_mem_attrib_proto;
+
 efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
 				   efi_system_table_t *sys_table_arg);
 
@@ -442,6 +445,25 @@ union efi_dxe_services_table {
 	} mixed_mode;
 };
 
+union  efi_memory_attribute_protocol {
+	struct {
+		void *get_memory_attributes;
+		efi_status_t (__efiapi *set_memory_attributes)(efi_memory_attribute_protocol_t *,
+								efi_physical_addr_t,
+								u64,
+								u64);
+		efi_status_t (__efiapi *clear_memory_attributes)(efi_memory_attribute_protocol_t *,
+								  efi_physical_addr_t,
+								  u64,
+								  u64);
+	};
+	struct {
+		u32 get_memory_attributes;
+		u32 set_memory_attributes;
+		u32 clear_memory_attributes;
+	} mixed_mode;
+};
+
 typedef union efi_uga_draw_protocol efi_uga_draw_protocol_t;
 
 union efi_uga_draw_protocol {
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 4b27519143f5..8a333d993829 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -391,6 +391,7 @@ void efi_native_runtime_setup(void);
 #define EFI_RT_PROPERTIES_TABLE_GUID		EFI_GUID(0xeb66918a, 0x7eef, 0x402a,  0x84, 0x2e, 0x93, 0x1d, 0x21, 0xc3, 0x8a, 0xe9)
 #define EFI_DXE_SERVICES_TABLE_GUID		EFI_GUID(0x05ad34ba, 0x6f02, 0x4214,  0x95, 0x2e, 0x4d, 0xa0, 0x39, 0x8e, 0x2b, 0xb9)
 #define EFI_SMBIOS_PROTOCOL_GUID		EFI_GUID(0x03583ff6, 0xcb36, 0x4940,  0x94, 0x7e, 0xb9, 0xb3, 0x9f, 0x4a, 0xfa, 0xf7)
+#define EFI_MEMORY_ATTRIBUTE_PROTOCOL_GUID	EFI_GUID(0xf4560cf6, 0x40ec, 0x4b4a,  0xa1, 0x92, 0xbf, 0x1d, 0x57, 0xd0, 0xb1, 0x89)
 
 #define EFI_IMAGE_SECURITY_DATABASE_GUID	EFI_GUID(0xd719b2cb, 0x3d3a, 0x4596,  0xa3, 0xbc, 0xda, 0xd0, 0x0e, 0x67, 0x65, 0x6f)
 #define EFI_SHIM_LOCK_GUID			EFI_GUID(0x605dab50, 0xe046, 0x4300,  0xab, 0xb6, 0x3d, 0xd8, 0x10, 0xdd, 0x8b, 0x23)
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 23/26] efi/libstub: Use memory attribute protocol
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (21 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 22/26] efi/libstub: Add memory attribute protocol definitions Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2023-03-10 16:13   ` Ard Biesheuvel
  2022-12-15 12:38 ` [PATCH v4 24/26] efi/libstub: make memory protection warnings include newlines Evgeniy Baskov
                   ` (3 subsequent siblings)
  26 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Evgeniy Baskov, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Peter Jones, Limonciello, Mario, joeyli, lvc-project, x86,
	linux-efi, linux-kernel, linux-hardening

Add EFI_MEMORY_ATTRIBUTE_PROTOCOL as preferred alternative to DXE
services for changing memory attributes in the EFISTUB.

Use DXE services only as a fallback in case aforementioned protocol
is not supported by UEFI implementation.

Move DXE services initialization code closer to the place they are used
to match EFI_MEMORY_ATTRIBUTE_PROTOCOL initialization code.

Tested-by: Mario Limonciello <mario.limonciello@amd.com>
Tested-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
---
 drivers/firmware/efi/libstub/mem.c      | 168 ++++++++++++++++++------
 drivers/firmware/efi/libstub/x86-stub.c |  17 ---
 2 files changed, 128 insertions(+), 57 deletions(-)

diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
index 3e47e5931f04..07d54c88c62e 100644
--- a/drivers/firmware/efi/libstub/mem.c
+++ b/drivers/firmware/efi/libstub/mem.c
@@ -5,6 +5,9 @@
 
 #include "efistub.h"
 
+const efi_dxe_services_table_t *efi_dxe_table;
+efi_memory_attribute_protocol_t *efi_mem_attrib_proto;
+
 /**
  * efi_get_memory_map() - get memory map
  * @map:		pointer to memory map pointer to which to assign the
@@ -129,66 +132,47 @@ void efi_free(unsigned long size, unsigned long addr)
 	efi_bs_call(free_pages, addr, nr_pages);
 }
 
-/**
- * efi_adjust_memory_range_protection() - change memory range protection attributes
- * @start:	memory range start address
- * @size:	memory range size
- *
- * Actual memory range for which memory attributes are modified is
- * the smallest ranged with start address and size aligned to EFI_PAGE_SIZE
- * that includes [start, start + size].
- *
- * @return: status code
- */
-efi_status_t efi_adjust_memory_range_protection(unsigned long start,
-						unsigned long size,
-						unsigned long attributes)
+static void retrieve_dxe_table(void)
+{
+	efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
+	if (efi_dxe_table &&
+	    efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
+		efi_warn("Ignoring DXE services table: invalid signature\n");
+		efi_dxe_table = NULL;
+	}
+}
+
+static efi_status_t adjust_mem_attrib_dxe(efi_physical_addr_t rounded_start,
+					  efi_physical_addr_t rounded_end,
+					  unsigned long attributes)
 {
 	efi_status_t status;
 	efi_gcd_memory_space_desc_t desc;
-	efi_physical_addr_t end, next;
-	efi_physical_addr_t rounded_start, rounded_end;
+	efi_physical_addr_t end, next, start;
 	efi_physical_addr_t unprotect_start, unprotect_size;
 
-	if (efi_dxe_table == NULL)
-		return EFI_UNSUPPORTED;
+	if (!efi_dxe_table) {
+		retrieve_dxe_table();
 
-	/*
-	 * This function should not be used to modify attributes
-	 * other than writable/executable.
-	 */
-
-	if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
-		return EFI_INVALID_PARAMETER;
-
-	/*
-	 * Disallow simultaniously executable and writable memory
-	 * to inforce W^X policy if direct extraction code is enabled.
-	 */
-
-	if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
-		efi_warn("W^X violation at [%08lx,%08lx]\n",
-			 (unsigned long)rounded_start,
-			 (unsigned long)rounded_end);
+		if (!efi_dxe_table)
+			return EFI_UNSUPPORTED;
 	}
 
-	rounded_start = rounddown(start, EFI_PAGE_SIZE);
-	rounded_end = roundup(start + size, EFI_PAGE_SIZE);
-
 	/*
 	 * Don't modify memory region attributes, they are
 	 * already suitable, to lower the possibility to
 	 * encounter firmware bugs.
 	 */
 
-	for (end = start + size; start < end; start = next) {
+
+	for (start = rounded_start, end = rounded_end; start < end; start = next) {
 
 		status = efi_dxe_call(get_memory_space_descriptor,
 				      start, &desc);
 
 		if (status != EFI_SUCCESS) {
 			efi_warn("Unable to get memory descriptor at %lx\n",
-				 start);
+				 (unsigned long)start);
 			return status;
 		}
 
@@ -230,3 +214,107 @@ efi_status_t efi_adjust_memory_range_protection(unsigned long start,
 
 	return EFI_SUCCESS;
 }
+
+static void retrieve_memory_attributes_proto(void)
+{
+	efi_status_t status;
+	efi_guid_t guid = EFI_MEMORY_ATTRIBUTE_PROTOCOL_GUID;
+
+	status = efi_bs_call(locate_protocol, &guid, NULL,
+			     (void **)&efi_mem_attrib_proto);
+	if (status != EFI_SUCCESS)
+		efi_mem_attrib_proto = NULL;
+}
+
+/**
+ * efi_adjust_memory_range_protection() - change memory range protection attributes
+ * @start:	memory range start address
+ * @size:	memory range size
+ *
+ * Actual memory range for which memory attributes are modified is
+ * the smallest ranged with start address and size aligned to EFI_PAGE_SIZE
+ * that includes [start, start + size].
+ *
+ * This function first attempts to use EFI_MEMORY_ATTRIBUTE_PROTOCOL,
+ * that is a part of UEFI Specification since version 2.10.
+ * If the protocol is unavailable it falls back to DXE services functions.
+ *
+ * @return: status code
+ */
+efi_status_t efi_adjust_memory_range_protection(unsigned long start,
+						unsigned long size,
+						unsigned long attributes)
+{
+	efi_status_t status;
+	efi_physical_addr_t rounded_start, rounded_end;
+	unsigned long attr_clear;
+
+	/*
+	 * This function should not be used to modify attributes
+	 * other than writable/executable.
+	 */
+
+	if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
+		return EFI_INVALID_PARAMETER;
+
+	/*
+	 * Warn if requested to make memory simultaneously
+	 * executable and writable to enforce W^X policy.
+	 */
+
+	if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
+		efi_warn("W^X violation at  [%08lx,%08lx]",
+			 (unsigned long)rounded_start,
+			 (unsigned long)rounded_end);
+	}
+
+	rounded_start = rounddown(start, EFI_PAGE_SIZE);
+	rounded_end = roundup(start + size, EFI_PAGE_SIZE);
+
+	if (!efi_mem_attrib_proto) {
+		retrieve_memory_attributes_proto();
+
+		/* Fall back to DXE services if unsupported */
+		if (!efi_mem_attrib_proto) {
+			return adjust_mem_attrib_dxe(rounded_start,
+						     rounded_end,
+						     attributes);
+		}
+	}
+
+	/*
+	 * Unlike DXE services functions, EFI_MEMORY_ATTRIBUTE_PROTOCOL
+	 * does not clear unset protection bit, so it needs to be cleared
+	 * explcitly
+	 */
+
+	attr_clear = ~attributes &
+		     (EFI_MEMORY_RO | EFI_MEMORY_XP | EFI_MEMORY_RP);
+
+	status = efi_call_proto(efi_mem_attrib_proto,
+				clear_memory_attributes,
+				rounded_start,
+				rounded_end - rounded_start,
+				attr_clear);
+	if (status != EFI_SUCCESS) {
+		efi_warn("Failed to clear memory attributes at [%08lx,%08lx]: %lx",
+			 (unsigned long)rounded_start,
+			 (unsigned long)rounded_end,
+			 status);
+		return status;
+	}
+
+	status = efi_call_proto(efi_mem_attrib_proto,
+				set_memory_attributes,
+				rounded_start,
+				rounded_end - rounded_start,
+				attributes);
+	if (status != EFI_SUCCESS) {
+		efi_warn("Failed to set memory attributes at [%08lx,%08lx]: %lx",
+			 (unsigned long)rounded_start,
+			 (unsigned long)rounded_end,
+			 status);
+	}
+
+	return status;
+}
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 60697fcd8950..06a62b121521 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -23,7 +23,6 @@
 #define MAXMEM_X86_64_4LEVEL (1ull << 46)
 
 const efi_system_table_t *efi_system_table;
-const efi_dxe_services_table_t *efi_dxe_table;
 u32 image_offset __section(".data");
 static efi_loaded_image_t *image __section(".data");
 
@@ -357,15 +356,6 @@ void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
 static void setup_sections_memory_protection(unsigned long image_base)
 {
 #ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
-	efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
-
-	if (!efi_dxe_table ||
-	    efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
-		efi_warn("Unable to locate EFI DXE services table\n");
-		efi_dxe_table = NULL;
-		return;
-	}
-
 	/* .setup [image_base, _head] */
 	efi_adjust_memory_range_protection(image_base,
 					   (unsigned long)_head - image_base,
@@ -732,13 +722,6 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
 	if (efi_system_table->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE)
 		efi_exit(handle, EFI_INVALID_PARAMETER);
 
-	efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
-	if (efi_dxe_table &&
-	    efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
-		efi_warn("Ignoring DXE services table: invalid signature\n");
-		efi_dxe_table = NULL;
-	}
-
 	setup_sections_memory_protection(bzimage_addr - image_offset);
 
 #ifdef CONFIG_CMDLINE_BOOL
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 24/26] efi/libstub: make memory protection warnings include newlines.
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (22 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 23/26] efi/libstub: Use memory attribute protocol Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 25/26] efi/x86: don't try to set page attributes on 0-sized regions Evgeniy Baskov
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Peter Jones, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

From: Peter Jones <pjones@redhat.com>

efi_warn() doesn't put newlines on messages, and that makes reading
warnings without newlines hard to do.

Signed-off-by: Peter Jones <pjones@redhat.com>
---
 drivers/firmware/efi/libstub/mem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
index 07d54c88c62e..b31d1975caa2 100644
--- a/drivers/firmware/efi/libstub/mem.c
+++ b/drivers/firmware/efi/libstub/mem.c
@@ -297,7 +297,7 @@ efi_status_t efi_adjust_memory_range_protection(unsigned long start,
 				rounded_end - rounded_start,
 				attr_clear);
 	if (status != EFI_SUCCESS) {
-		efi_warn("Failed to clear memory attributes at [%08lx,%08lx]: %lx",
+		efi_warn("Failed to clear memory attributes at [%08lx,%08lx]: %lx\n",
 			 (unsigned long)rounded_start,
 			 (unsigned long)rounded_end,
 			 status);
@@ -310,7 +310,7 @@ efi_status_t efi_adjust_memory_range_protection(unsigned long start,
 				rounded_end - rounded_start,
 				attributes);
 	if (status != EFI_SUCCESS) {
-		efi_warn("Failed to set memory attributes at [%08lx,%08lx]: %lx",
+		efi_warn("Failed to set memory attributes at [%08lx,%08lx]: %lx\n",
 			 (unsigned long)rounded_start,
 			 (unsigned long)rounded_end,
 			 status);
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 25/26] efi/x86: don't try to set page attributes on 0-sized regions.
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (23 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 24/26] efi/libstub: make memory protection warnings include newlines Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 12:38 ` [PATCH v4 26/26] efi/x86: don't set unsupported memory attributes Evgeniy Baskov
  2022-12-15 19:21 ` [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Peter Jones
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Peter Jones, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

From: Peter Jones <pjones@redhat.com>

In "efi/x86: Explicitly set sections memory attributes", the following
region is defined to help compute page permissions:

          /* .setup [image_base, _head] */
          efi_adjust_memory_range_protection(image_base,
                                             (unsigned long)_head - image_base,
                                             EFI_MEMORY_RO | EFI_MEMORY_XP);

In at least some cases, that will result in a size of 0, which will
produce an error and a message on the console, though no actual failure
will be caused in the boot process.

This patch checks that case in efi_adjust_memory_range_protection() and
returns the error without logging.

Signed-off-by: Peter Jones <pjones@redhat.com>
---
 drivers/firmware/efi/libstub/mem.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
index b31d1975caa2..50a0b649b75a 100644
--- a/drivers/firmware/efi/libstub/mem.c
+++ b/drivers/firmware/efi/libstub/mem.c
@@ -249,6 +249,9 @@ efi_status_t efi_adjust_memory_range_protection(unsigned long start,
 	efi_physical_addr_t rounded_start, rounded_end;
 	unsigned long attr_clear;
 
+	if (size == 0)
+		return EFI_INVALID_PARAMETER;
+
 	/*
 	 * This function should not be used to modify attributes
 	 * other than writable/executable.
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [PATCH v4 26/26] efi/x86: don't set unsupported memory attributes
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (24 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 25/26] efi/x86: don't try to set page attributes on 0-sized regions Evgeniy Baskov
@ 2022-12-15 12:38 ` Evgeniy Baskov
  2022-12-15 19:21 ` [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Peter Jones
  26 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-15 12:38 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Peter Jones, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

From: Peter Jones <pjones@redhat.com>

On platforms where the firmware uses DXE, but which do not implement the
EFI Memory Attribute Protocol, we implement W^X support using DXE's
set_memory_attributes() call.  This call will fail without making any
changes if an attribute is set that isn't supported on the platform.

This patch changes efi_adjust_memory_range_protection() to avoid trying
to set any attribute bits that aren't set in the memory region's
capability flags.

Signed-off-by: Peter Jones <pjones@redhat.com>
---
 drivers/firmware/efi/libstub/mem.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
index 50a0b649b75a..b86ea2920d5e 100644
--- a/drivers/firmware/efi/libstub/mem.c
+++ b/drivers/firmware/efi/libstub/mem.c
@@ -195,6 +195,7 @@ static efi_status_t adjust_mem_attrib_dxe(efi_physical_addr_t rounded_start,
 
 		desc.attributes &= ~(EFI_MEMORY_RO | EFI_MEMORY_XP);
 		desc.attributes |= attributes;
+		desc.attributes &= desc.capabilities;
 
 		unprotect_start = max(rounded_start, desc.base_address);
 		unprotect_size = min(rounded_end, next) - unprotect_start;
-- 
2.37.4


^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage
  2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
                   ` (25 preceding siblings ...)
  2022-12-15 12:38 ` [PATCH v4 26/26] efi/x86: don't set unsupported memory attributes Evgeniy Baskov
@ 2022-12-15 19:21 ` Peter Jones
  2022-12-19 14:08   ` Evgeniy Baskov
  26 siblings, 1 reply; 78+ messages in thread
From: Peter Jones @ 2022-12-15 19:21 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Ard Biesheuvel, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, Dec 15, 2022 at 03:37:51PM +0300, Evgeniy Baskov wrote:
> This patchset is aimed
> * to improve UEFI compatibility of compressed kernel code for x86_64
> * to setup proper memory access attributes for code and rodata sections
> * to implement W^X protection policy throughout the whole execution 
>   of compressed kernel for EFISTUB code path. 

Hi Evgeniy,

Aside from some minor patch fuzz in patch 6 due to building this in
today's Fedora rawhide kernel rather than mainline, this patch set works
for me.

Thanks!

-- 
        Peter


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage
  2022-12-15 19:21 ` [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Peter Jones
@ 2022-12-19 14:08   ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2022-12-19 14:08 UTC (permalink / raw)
  To: Peter Jones
  Cc: Ard Biesheuvel, Borislav Petkov, Andy Lutomirski, Dave Hansen,
	Ingo Molnar, Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2022-12-15 22:21, Peter Jones wrote:
> On Thu, Dec 15, 2022 at 03:37:51PM +0300, Evgeniy Baskov wrote:
>> This patchset is aimed
>> * to improve UEFI compatibility of compressed kernel code for x86_64
>> * to setup proper memory access attributes for code and rodata 
>> sections
>> * to implement W^X protection policy throughout the whole execution
>>   of compressed kernel for EFISTUB code path.
> 
> Hi Evgeniy,
> 
> Aside from some minor patch fuzz in patch 6 due to building this in
> today's Fedora rawhide kernel rather than mainline, this patch set 
> works
> for me.
> 
> Thanks!

Nice to hear that, thank you for testing again!

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 04/26] x86/boot: Increase boot page table size
  2022-12-15 12:37 ` [PATCH v4 04/26] x86/boot: Increase boot page table size Evgeniy Baskov
@ 2023-03-08  9:24   ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-08  9:24 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Previous upper limit ignored pages implicitly mapped from #PF handler
> by code accessing ACPI tables (boot/compressed/{acpi.c,efi.c}),
> so theoretical upper limit is higher than it was set.
>
> Using 4KB pages is desirable for better memory protection granularity.
> Approximately twice as much memory is required for those.
>
> Increase initial page table size to 64 4KB page tables.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/include/asm/boot.h | 26 ++++++++++++++------------
>  1 file changed, 14 insertions(+), 12 deletions(-)
>
> diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
> index 9191280d9ea3..024d972c248e 100644
> --- a/arch/x86/include/asm/boot.h
> +++ b/arch/x86/include/asm/boot.h
> @@ -41,22 +41,24 @@
>  # define BOOT_STACK_SIZE       0x4000
>
>  # define BOOT_INIT_PGT_SIZE    (6*4096)
> -# ifdef CONFIG_RANDOMIZE_BASE
>  /*
>   * Assuming all cross the 512GB boundary:
>   * 1 page for level4
> - * (2+2)*4 pages for kernel, param, cmd_line, and randomized kernel
> - * 2 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
> - * Total is 19 pages.
> + * (3+3)*2 pages for param and cmd_line
> + * (2+2+S)*2 pages for kernel and randomized kernel, where S is total number
> + *     of sections of kernel. Explanation: 2+2 are upper level page tables.
> + *     We can have only S unaligned parts of section: 1 at the end of the kernel
> + *     and (S-1) at the section borders. The start address of the kernel is
> + *     aligned, so an extra page table. There are at most S=6 sections in
> + *     vmlinux ELF image.
> + * 3 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
> + * Total is 36 pages.
> + *
> + * Some pages are also required for UEFI memory map and
> + * ACPI table mappings, so we need to add extra space.
> + * FIXME: Figure out exact amount of pages.

So you are rounding up 36 to 64 to account for these pages, right?

So we should either drop the FIXME and explain that this is fine, or
fix it - we cannot merge it like this.

Thanks,
Ard.

>   */
> -#  ifdef CONFIG_X86_VERBOSE_BOOTUP
> -#   define BOOT_PGT_SIZE       (19*4096)
> -#  else /* !CONFIG_X86_VERBOSE_BOOTUP */
> -#   define BOOT_PGT_SIZE       (17*4096)
> -#  endif
> -# else /* !CONFIG_RANDOMIZE_BASE */
> -#  define BOOT_PGT_SIZE                BOOT_INIT_PGT_SIZE
> -# endif
> +# define BOOT_PGT_SIZE         (64*4096)
>
>  #else /* !CONFIG_X86_64 */
>  # define BOOT_STACK_SIZE       0x1000
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build
  2022-12-15 12:37 ` [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build Evgeniy Baskov
@ 2023-03-08  9:34   ` Ard Biesheuvel
  2023-03-08 16:05     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-08  9:34 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Check if there are simultaneously writable and executable
> program segments in vmlinux ELF image and fail build if there are any.
>
> This would prevent accidental introduction of RWX segments.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/compressed/Makefile | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
> index 1acff356d97a..4dcab38f5a38 100644
> --- a/arch/x86/boot/compressed/Makefile
> +++ b/arch/x86/boot/compressed/Makefile
> @@ -112,11 +112,17 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
>  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
>  vmlinux-objs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
>
> +quiet_cmd_wx_check = WXCHK   $<
> +cmd_wx_check = if $(OBJDUMP) -p $< | grep "flags .wx" > /dev/null; \
> +              then (echo >&2 "$<: Simultaneously writable and executable sections are prohibited"; \
> +                    /bin/false); fi
> +
>  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
>         $(call if_changed,ld)
>
>  OBJCOPYFLAGS_vmlinux.bin :=  -R .comment -S
>  $(obj)/vmlinux.bin: vmlinux FORCE
> +       $(call cmd,wx_check)

This breaks the way we track dependencies between make targets: the
FORCE will result in the check being performed every time, even if
nothing gets rebuilt.

Better to do something like the below (apologies for the alphabet soup)


--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -112,18 +112,17 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) +=
$(objtree)/drivers/firmware/efi/libstub/lib.a

-quiet_cmd_wx_check = WXCHK   $<
-cmd_wx_check = if $(OBJDUMP) -p $< | grep "flags .wx" > /dev/null; \
-              then (echo >&2 "$<: Simultaneously writable and
executable sections are prohibited"; \
-                    /bin/false); fi
+quiet_cmd_objcopy_and_wx_check = $(quiet_cmd_objcopy)
+      cmd_objcopy_and_wx_check = if $(OBJDUMP) -p $< | grep "flags
.wx" > /dev/null; then \
+                                       (echo >&2 "$<: Simultaneously
writable and executable sections are prohibited"; \
+                                       /bin/false); else $(cmd_objcopy); fi

 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
        $(call if_changed,ld)

 OBJCOPYFLAGS_vmlinux.bin :=  -R .comment -S
 $(obj)/vmlinux.bin: vmlinux FORCE
-       $(call cmd,wx_check)
-       $(call if_changed,objcopy)
+       $(call if_changed,objcopy_and_wx_check)

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 08/26] x86/boot: Map memory explicitly
  2022-12-15 12:37 ` [PATCH v4 08/26] x86/boot: Map memory explicitly Evgeniy Baskov
@ 2023-03-08  9:38   ` Ard Biesheuvel
  2023-03-08 10:28     ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-08  9:38 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Implicit mappings hide possible memory errors, e.g. allocations for
> ACPI tables were not included in boot page table size.
>
> Replace all implicit mappings from page fault handler with
> explicit mappings.
>

I agree with the motivation but this patch seems to break the boot
under SeaBIOS/QEMU, and I imagine other legacy BIOS boot scenarios as
well.

Naively, I would assume that there is simply a legacy BIOS region that
we fail to map here, but I am fairly clueless when it comes to non-EFI
x86 boot so take this with a grain of salt.


> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/compressed/acpi.c  | 25 ++++++++++++++++++++++++-
>  arch/x86/boot/compressed/efi.c   | 19 ++++++++++++++++++-
>  arch/x86/boot/compressed/kaslr.c |  4 ++++
>  3 files changed, 46 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/acpi.c b/arch/x86/boot/compressed/acpi.c
> index 9caf89063e77..c775e01fc7db 100644
> --- a/arch/x86/boot/compressed/acpi.c
> +++ b/arch/x86/boot/compressed/acpi.c
> @@ -93,6 +93,8 @@ static u8 *scan_mem_for_rsdp(u8 *start, u32 length)
>
>         end = start + length;
>
> +       kernel_add_identity_map((unsigned long)start, (unsigned long)end, 0);
> +
>         /* Search from given start address for the requested length */
>         for (address = start; address < end; address += ACPI_RSDP_SCAN_STEP) {
>                 /*
> @@ -128,6 +130,9 @@ static acpi_physical_address bios_get_rsdp_addr(void)
>         unsigned long address;
>         u8 *rsdp;
>
> +       kernel_add_identity_map((unsigned long)ACPI_EBDA_PTR_LOCATION,
> +                               (unsigned long)ACPI_EBDA_PTR_LOCATION + 2, 0);
> +
>         /* Get the location of the Extended BIOS Data Area (EBDA) */
>         address = *(u16 *)ACPI_EBDA_PTR_LOCATION;
>         address <<= 4;
> @@ -215,6 +220,9 @@ static unsigned long get_acpi_srat_table(void)
>         if (!rsdp)
>                 return 0;
>
> +       kernel_add_identity_map((unsigned long)rsdp,
> +                               (unsigned long)(rsdp + 1), 0);
> +
>         /* Get ACPI root table from RSDP.*/
>         if (!(cmdline_find_option("acpi", arg, sizeof(arg)) == 4 &&
>             !strncmp(arg, "rsdt", 4)) &&
> @@ -231,10 +239,17 @@ static unsigned long get_acpi_srat_table(void)
>                 return 0;
>
>         header = (struct acpi_table_header *)root_table;
> +
> +       kernel_add_identity_map((unsigned long)header,
> +                               (unsigned long)(header + 1), 0);
> +
>         len = header->length;
>         if (len < sizeof(struct acpi_table_header) + size)
>                 return 0;
>
> +       kernel_add_identity_map((unsigned long)header,
> +                               (unsigned long)header + len, 0);
> +
>         num_entries = (len - sizeof(struct acpi_table_header)) / size;
>         entry = (u8 *)(root_table + sizeof(struct acpi_table_header));
>
> @@ -247,8 +262,16 @@ static unsigned long get_acpi_srat_table(void)
>                 if (acpi_table) {
>                         header = (struct acpi_table_header *)acpi_table;
>
> -                       if (ACPI_COMPARE_NAMESEG(header->signature, ACPI_SIG_SRAT))
> +                       kernel_add_identity_map(acpi_table,
> +                                               acpi_table + sizeof(*header),
> +                                               0);
> +
> +                       if (ACPI_COMPARE_NAMESEG(header->signature, ACPI_SIG_SRAT)) {
> +                               kernel_add_identity_map(acpi_table,
> +                                                       acpi_table + header->length,
> +                                                       0);
>                                 return acpi_table;
> +                       }
>                 }
>                 entry += size;
>         }
> diff --git a/arch/x86/boot/compressed/efi.c b/arch/x86/boot/compressed/efi.c
> index 6edd034b0b30..ce70103fbbc0 100644
> --- a/arch/x86/boot/compressed/efi.c
> +++ b/arch/x86/boot/compressed/efi.c
> @@ -57,10 +57,14 @@ enum efi_type efi_get_type(struct boot_params *bp)
>   */
>  unsigned long efi_get_system_table(struct boot_params *bp)
>  {
> -       unsigned long sys_tbl_pa;
> +       static unsigned long sys_tbl_pa __section(".data");
>         struct efi_info *ei;
> +       unsigned long sys_tbl_size;
>         enum efi_type et;
>
> +       if (sys_tbl_pa)
> +               return sys_tbl_pa;
> +
>         /* Get systab from boot params. */
>         ei = &bp->efi_info;
>  #ifdef CONFIG_X86_64
> @@ -73,6 +77,13 @@ unsigned long efi_get_system_table(struct boot_params *bp)
>                 return 0;
>         }
>
> +       if (efi_get_type(bp) == EFI_TYPE_64)
> +               sys_tbl_size = sizeof(efi_system_table_64_t);
> +       else
> +               sys_tbl_size = sizeof(efi_system_table_32_t);
> +
> +       kernel_add_identity_map(sys_tbl_pa, sys_tbl_pa + sys_tbl_size, 0);
> +
>         return sys_tbl_pa;
>  }
>
> @@ -92,6 +103,10 @@ static struct efi_setup_data *get_kexec_setup_data(struct boot_params *bp,
>
>         pa_data = bp->hdr.setup_data;
>         while (pa_data) {
> +               unsigned long pa_data_end = pa_data + sizeof(struct setup_data)
> +                                         + sizeof(struct efi_setup_data);
> +               kernel_add_identity_map(pa_data, pa_data_end, 0);
> +
>                 data = (struct setup_data *)pa_data;
>                 if (data->type == SETUP_EFI) {
>                         esd = (struct efi_setup_data *)(pa_data + sizeof(struct setup_data));
> @@ -160,6 +175,8 @@ int efi_get_conf_table(struct boot_params *bp, unsigned long *cfg_tbl_pa,
>                 return -EINVAL;
>         }
>
> +       kernel_add_identity_map(*cfg_tbl_pa, *cfg_tbl_pa + *cfg_tbl_len, 0);
> +
>         return 0;
>  }
>
> diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
> index 454757fbdfe5..c0ee116c4fa2 100644
> --- a/arch/x86/boot/compressed/kaslr.c
> +++ b/arch/x86/boot/compressed/kaslr.c
> @@ -688,6 +688,8 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
>         u32 nr_desc;
>         int i;
>
> +       kernel_add_identity_map((unsigned long)e, (unsigned long)(e + 1), 0);
> +
>         signature = (char *)&e->efi_loader_signature;
>         if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
>             strncmp(signature, EFI64_LOADER_SIGNATURE, 4))
> @@ -704,6 +706,8 @@ process_efi_entries(unsigned long minimum, unsigned long image_size)
>         pmap = (e->efi_memmap | ((__u64)e->efi_memmap_hi << 32));
>  #endif
>
> +       kernel_add_identity_map(pmap, pmap + e->efi_memmap_size, 0);
> +
>         nr_desc = e->efi_memmap_size / e->efi_memdesc_size;
>         for (i = 0; i < nr_desc; i++) {
>                 md = efi_early_memdesc_ptr(pmap, e->efi_memdesc_size, i);
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping
  2022-12-15 12:37 ` [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping Evgeniy Baskov
@ 2023-03-08  9:42   ` Ard Biesheuvel
  2023-03-08 16:11     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-08  9:42 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Current identity mapping code only supports 2M and 1G pages.
> 4KB pages are desirable for better memory protection granularity
> in compressed kernel code.
>
> Change identity mapping code to support 4KB pages and
> memory remapping with different attributes.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

This patch triggers an error reported by the build bots:

arch/x86/mm/ident_map.c:19:8: warning: no previous prototype for
'ident_split_large_pmd'


> ---
>  arch/x86/include/asm/init.h |   1 +
>  arch/x86/mm/ident_map.c     | 185 +++++++++++++++++++++++++++++-------
>  2 files changed, 154 insertions(+), 32 deletions(-)
>
> diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
> index 5f1d3c421f68..a8277ee82c51 100644
> --- a/arch/x86/include/asm/init.h
> +++ b/arch/x86/include/asm/init.h
> @@ -8,6 +8,7 @@ struct x86_mapping_info {
>         unsigned long page_flag;         /* page flag for PMD or PUD entry */
>         unsigned long offset;            /* ident mapping offset */
>         bool direct_gbpages;             /* PUD level 1GB page support */
> +       bool allow_4kpages;              /* Allow more granular mappings with 4K pages */
>         unsigned long kernpg_flag;       /* kernel pagetable flag override */
>  };
>
> diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
> index 968d7005f4a7..662e794a325d 100644
> --- a/arch/x86/mm/ident_map.c
> +++ b/arch/x86/mm/ident_map.c
> @@ -4,24 +4,127 @@
>   * included by both the compressed kernel and the regular kernel.
>   */
>
> -static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
> -                          unsigned long addr, unsigned long end)
> +static void ident_pte_init(struct x86_mapping_info *info, pte_t *pte_page,
> +                          unsigned long addr, unsigned long end,
> +                          unsigned long flags)
>  {
> -       addr &= PMD_MASK;
> -       for (; addr < end; addr += PMD_SIZE) {
> +       addr &= PAGE_MASK;
> +       for (; addr < end; addr += PAGE_SIZE) {
> +               pte_t *pte = pte_page + pte_index(addr);
> +
> +               set_pte(pte, __pte((addr - info->offset) | flags));
> +       }
> +}
> +
> +pte_t *ident_split_large_pmd(struct x86_mapping_info *info,
> +                            pmd_t *pmdp, unsigned long page_addr)
> +{
> +       unsigned long pmd_addr, page_flags;
> +       pte_t *pte;
> +
> +       pte = (pte_t *)info->alloc_pgt_page(info->context);
> +       if (!pte)
> +               return NULL;
> +
> +       pmd_addr = page_addr & PMD_MASK;
> +
> +       /* Not a large page - clear PSE flag */
> +       page_flags = pmd_flags(*pmdp) & ~_PSE;
> +       ident_pte_init(info, pte, pmd_addr, pmd_addr + PMD_SIZE, page_flags);
> +
> +       return pte;
> +}
> +
> +static int ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
> +                         unsigned long addr, unsigned long end,
> +                         unsigned long flags)
> +{
> +       unsigned long next;
> +       bool new_table = 0;
> +
> +       for (; addr < end; addr = next) {
>                 pmd_t *pmd = pmd_page + pmd_index(addr);
> +               pte_t *pte;
>
> -               if (pmd_present(*pmd))
> +               next = (addr & PMD_MASK) + PMD_SIZE;
> +               if (next > end)
> +                       next = end;
> +
> +               /*
> +                * Use 2M pages if 4k pages are not allowed or
> +                * we are not mapping extra, i.e. address and size are aligned.
> +                */
> +
> +               if (!info->allow_4kpages ||
> +                   (!(addr & ~PMD_MASK) && next == addr + PMD_SIZE)) {
> +
> +                       pmd_t pmdval;
> +
> +                       addr &= PMD_MASK;
> +                       pmdval = __pmd((addr - info->offset) | flags | _PSE);
> +                       set_pmd(pmd, pmdval);
>                         continue;
> +               }
> +
> +               /*
> +                * If currently mapped page is large, we need to split it.
> +                * The case when we don't can remap 2M page to 2M page
> +                * with different flags is already covered above.
> +                *
> +                * If there's nothing mapped to desired address,
> +                * we need to allocate new page table.
> +                */
>
> -               set_pmd(pmd, __pmd((addr - info->offset) | info->page_flag));
> +               if (pmd_large(*pmd)) {
> +                       pte = ident_split_large_pmd(info, pmd, addr);
> +                       new_table = 1;
> +               } else if (!pmd_present(*pmd)) {
> +                       pte = (pte_t *)info->alloc_pgt_page(info->context);
> +                       new_table = 1;
> +               } else {
> +                       pte = pte_offset_kernel(pmd, 0);
> +                       new_table = 0;
> +               }
> +
> +               if (!pte)
> +                       return -ENOMEM;
> +
> +               ident_pte_init(info, pte, addr, next, flags);
> +
> +               if (new_table)
> +                       set_pmd(pmd, __pmd(__pa(pte) | info->kernpg_flag));
>         }
> +
> +       return 0;
>  }
>
> +
> +pmd_t *ident_split_large_pud(struct x86_mapping_info *info,
> +                            pud_t *pudp, unsigned long page_addr)
> +{
> +       unsigned long pud_addr, page_flags;
> +       pmd_t *pmd;
> +
> +       pmd = (pmd_t *)info->alloc_pgt_page(info->context);
> +       if (!pmd)
> +               return NULL;
> +
> +       pud_addr = page_addr & PUD_MASK;
> +
> +       /* Not a large page - clear PSE flag */
> +       page_flags = pud_flags(*pudp) & ~_PSE;
> +       ident_pmd_init(info, pmd, pud_addr, pud_addr + PUD_SIZE, page_flags);
> +
> +       return pmd;
> +}
> +
> +
>  static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
>                           unsigned long addr, unsigned long end)
>  {
>         unsigned long next;
> +       bool new_table = 0;
> +       int result;
>
>         for (; addr < end; addr = next) {
>                 pud_t *pud = pud_page + pud_index(addr);
> @@ -31,28 +134,39 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
>                 if (next > end)
>                         next = end;
>
> +               /* Use 1G pages only if forced, even if they are supported. */
>                 if (info->direct_gbpages) {
>                         pud_t pudval;
> -
> -                       if (pud_present(*pud))
> -                               continue;
> +                       unsigned long flags;
>
>                         addr &= PUD_MASK;
> -                       pudval = __pud((addr - info->offset) | info->page_flag);
> +                       flags = info->page_flag | _PSE;
> +                       pudval = __pud((addr - info->offset) | flags);
> +
>                         set_pud(pud, pudval);
>                         continue;
>                 }
>
> -               if (pud_present(*pud)) {
> +               if (pud_large(*pud)) {
> +                       pmd = ident_split_large_pud(info, pud, addr);
> +                       new_table = 1;
> +               } else if (!pud_present(*pud)) {
> +                       pmd = (pmd_t *)info->alloc_pgt_page(info->context);
> +                       new_table = 1;
> +               } else {
>                         pmd = pmd_offset(pud, 0);
> -                       ident_pmd_init(info, pmd, addr, next);
> -                       continue;
> +                       new_table = 0;
>                 }
> -               pmd = (pmd_t *)info->alloc_pgt_page(info->context);
> +
>                 if (!pmd)
>                         return -ENOMEM;
> -               ident_pmd_init(info, pmd, addr, next);
> -               set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
> +
> +               result = ident_pmd_init(info, pmd, addr, next, info->page_flag);
> +               if (result)
> +                       return result;
> +
> +               if (new_table)
> +                       set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
>         }
>
>         return 0;
> @@ -63,6 +177,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
>  {
>         unsigned long next;
>         int result;
> +       bool new_table = 0;
>
>         for (; addr < end; addr = next) {
>                 p4d_t *p4d = p4d_page + p4d_index(addr);
> @@ -72,15 +187,14 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
>                 if (next > end)
>                         next = end;
>
> -               if (p4d_present(*p4d)) {
> +               if (!p4d_present(*p4d)) {
> +                       pud = (pud_t *)info->alloc_pgt_page(info->context);
> +                       new_table = 1;
> +               } else {
>                         pud = pud_offset(p4d, 0);
> -                       result = ident_pud_init(info, pud, addr, next);
> -                       if (result)
> -                               return result;
> -
> -                       continue;
> +                       new_table = 0;
>                 }
> -               pud = (pud_t *)info->alloc_pgt_page(info->context);
> +
>                 if (!pud)
>                         return -ENOMEM;
>
> @@ -88,19 +202,22 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
>                 if (result)
>                         return result;
>
> -               set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
> +               if (new_table)
> +                       set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
>         }
>
>         return 0;
>  }
>
> -int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
> -                             unsigned long pstart, unsigned long pend)
> +int kernel_ident_mapping_init(struct x86_mapping_info *info,
> +                             pgd_t *pgd_page, unsigned long pstart,
> +                             unsigned long pend)
>  {
>         unsigned long addr = pstart + info->offset;
>         unsigned long end = pend + info->offset;
>         unsigned long next;
>         int result;
> +       bool new_table;
>
>         /* Set the default pagetable flags if not supplied */
>         if (!info->kernpg_flag)
> @@ -117,20 +234,24 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
>                 if (next > end)
>                         next = end;
>
> -               if (pgd_present(*pgd)) {
> +               if (!pgd_present(*pgd)) {
> +                       p4d = (p4d_t *)info->alloc_pgt_page(info->context);
> +                       new_table = 1;
> +               } else {
>                         p4d = p4d_offset(pgd, 0);
> -                       result = ident_p4d_init(info, p4d, addr, next);
> -                       if (result)
> -                               return result;
> -                       continue;
> +                       new_table = 0;
>                 }
>
> -               p4d = (p4d_t *)info->alloc_pgt_page(info->context);
>                 if (!p4d)
>                         return -ENOMEM;
> +
>                 result = ident_p4d_init(info, p4d, addr, next);
>                 if (result)
>                         return result;
> +
> +               if (!new_table)
> +                       continue;
> +
>                 if (pgtable_l5_enabled()) {
>                         set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag));
>                 } else {
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 08/26] x86/boot: Map memory explicitly
  2023-03-08  9:38   ` Ard Biesheuvel
@ 2023-03-08 10:28     ` Ard Biesheuvel
  2023-03-08 16:09       ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-08 10:28 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Wed, 8 Mar 2023 at 10:38, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >
> > Implicit mappings hide possible memory errors, e.g. allocations for
> > ACPI tables were not included in boot page table size.
> >
> > Replace all implicit mappings from page fault handler with
> > explicit mappings.
> >
>
> I agree with the motivation but this patch seems to break the boot
> under SeaBIOS/QEMU, and I imagine other legacy BIOS boot scenarios as
> well.
>
> Naively, I would assume that there is simply a legacy BIOS region that
> we fail to map here, but I am fairly clueless when it comes to non-EFI
> x86 boot so take this with a grain of salt.
>

The below seems to help - not sure why exactly, but apparently legacy
BIOS needs the bootparams struct to be mapped writable?

--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -31,6 +31,7 @@
 #include <linux/ctype.h>
 #include <generated/utsversion.h>
 #include <generated/utsrelease.h>
+#include <asm/shared/pgtable.h>

 #define _SETUP
 #include <asm/setup.h> /* For COMMAND_LINE_SIZE */
@@ -688,7 +689,7 @@ process_efi_entries(unsigned long minimum,
unsigned long image_size)
        u32 nr_desc;
        int i;

-       kernel_add_identity_map((unsigned long)e, (unsigned long)(e + 1), 0);
+       kernel_add_identity_map((unsigned long)e, (unsigned long)(e +
1), MAP_WRITE);

        signature = (char *)&e->efi_loader_signature;
        if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code
  2022-12-15 12:37 ` [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code Evgeniy Baskov
@ 2023-03-08 10:47   ` Ard Biesheuvel
  2023-03-08 16:15     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-08 10:47 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Use previously added code to use 4KB pages for mapping. Map compressed
> and uncompressed kernel with appropriate memory protection attributes.
> For compressed kernel set them up manually. For uncompressed kernel
> used flags specified in ELF header.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>

This patch breaks the 'nokaslr' command line option (at least with
SeaBIOS) unless I apply the hunk below:


--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -329,7 +329,8 @@ static size_t parse_elf(void *output, unsigned
long output_len,

        handle_relocations(output, output_len, virt_addr);

-       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
+       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) ||
+           cmdline_find_option_bool("nokaslr"))
                goto skip_protect;

        for (i = 0; i < ehdr.e_phnum; i++) {
@@ -481,8 +482,10 @@ asmlinkage __visible void *extract_kernel(void
*rmode, memptr heap,
         * If KASLR is disabled input and output regions may overlap.
         * In this case we need to map region excutable as well.
         */
-       unsigned long map_flags = MAP_ALLOC | MAP_WRITE |
-                       (IS_ENABLED(CONFIG_RANDOMIZE_BASE) ? 0 : MAP_EXEC);
+       unsigned long map_flags = MAP_ALLOC | MAP_WRITE;
+       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) ||
+           cmdline_find_option_bool("nokaslr"))
+               map_flags |= MAP_EXEC;
        phys_addr = kernel_add_identity_map(phys_addr,
                                            phys_addr + needed_size,
                                            map_flags);

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build
  2023-03-08  9:34   ` Ard Biesheuvel
@ 2023-03-08 16:05     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-08 16:05 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-08 12:34, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Check if there are simultaneously writable and executable
>> program segments in vmlinux ELF image and fail build if there are any.
>> 
>> This would prevent accidental introduction of RWX segments.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> ---
>>  arch/x86/boot/compressed/Makefile | 6 ++++++
>>  1 file changed, 6 insertions(+)
>> 
>> diff --git a/arch/x86/boot/compressed/Makefile 
>> b/arch/x86/boot/compressed/Makefile
>> index 1acff356d97a..4dcab38f5a38 100644
>> --- a/arch/x86/boot/compressed/Makefile
>> +++ b/arch/x86/boot/compressed/Makefile
>> @@ -112,11 +112,17 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
>>  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
>>  vmlinux-objs-$(CONFIG_EFI_STUB) += 
>> $(objtree)/drivers/firmware/efi/libstub/lib.a
>> 
>> +quiet_cmd_wx_check = WXCHK   $<
>> +cmd_wx_check = if $(OBJDUMP) -p $< | grep "flags .wx" > /dev/null; \
>> +              then (echo >&2 "$<: Simultaneously writable and 
>> executable sections are prohibited"; \
>> +                    /bin/false); fi
>> +
>>  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
>>         $(call if_changed,ld)
>> 
>>  OBJCOPYFLAGS_vmlinux.bin :=  -R .comment -S
>>  $(obj)/vmlinux.bin: vmlinux FORCE
>> +       $(call cmd,wx_check)
> 
> This breaks the way we track dependencies between make targets: the
> FORCE will result in the check being performed every time, even if
> nothing gets rebuilt.
> 
> Better to do something like the below (apologies for the alphabet soup)
> 
> 
> --- a/arch/x86/boot/compressed/Makefile
> +++ b/arch/x86/boot/compressed/Makefile
> @@ -112,18 +112,17 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
>  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
>  vmlinux-objs-$(CONFIG_EFI_STUB) +=
> $(objtree)/drivers/firmware/efi/libstub/lib.a
> 
> -quiet_cmd_wx_check = WXCHK   $<
> -cmd_wx_check = if $(OBJDUMP) -p $< | grep "flags .wx" > /dev/null; \
> -              then (echo >&2 "$<: Simultaneously writable and
> executable sections are prohibited"; \
> -                    /bin/false); fi
> +quiet_cmd_objcopy_and_wx_check = $(quiet_cmd_objcopy)
> +      cmd_objcopy_and_wx_check = if $(OBJDUMP) -p $< | grep "flags
> .wx" > /dev/null; then \
> +                                       (echo >&2 "$<: Simultaneously
> writable and executable sections are prohibited"; \
> +                                       /bin/false); else 
> $(cmd_objcopy); fi
> 
>  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
>         $(call if_changed,ld)
> 
>  OBJCOPYFLAGS_vmlinux.bin :=  -R .comment -S
>  $(obj)/vmlinux.bin: vmlinux FORCE
> -       $(call cmd,wx_check)
> -       $(call if_changed,objcopy)
> +       $(call if_changed,objcopy_and_wx_check)

Thank you for suggestion! I will fix it.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 08/26] x86/boot: Map memory explicitly
  2023-03-08 10:28     ` Ard Biesheuvel
@ 2023-03-08 16:09       ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-08 16:09 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-08 13:28, Ard Biesheuvel wrote:
> On Wed, 8 Mar 2023 at 10:38, Ard Biesheuvel <ardb@kernel.org> wrote:
>> 
>> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >
>> > Implicit mappings hide possible memory errors, e.g. allocations for
>> > ACPI tables were not included in boot page table size.
>> >
>> > Replace all implicit mappings from page fault handler with
>> > explicit mappings.
>> >
>> 
>> I agree with the motivation but this patch seems to break the boot
>> under SeaBIOS/QEMU, and I imagine other legacy BIOS boot scenarios as
>> well.
>> 
>> Naively, I would assume that there is simply a legacy BIOS region that
>> we fail to map here, but I am fairly clueless when it comes to non-EFI
>> x86 boot so take this with a grain of salt.
>> 
> 
> The below seems to help - not sure why exactly, but apparently legacy
> BIOS needs the bootparams struct to be mapped writable?

I think I got too eager adding mappings to everything.
In the process_efi_entries() bootparams should already be mapped, so
I will just remove the call. And AFAIK bootparams is indeed gets
written to.

> 
> --- a/arch/x86/boot/compressed/kaslr.c
> +++ b/arch/x86/boot/compressed/kaslr.c
> @@ -31,6 +31,7 @@
>  #include <linux/ctype.h>
>  #include <generated/utsversion.h>
>  #include <generated/utsrelease.h>
> +#include <asm/shared/pgtable.h>
> 
>  #define _SETUP
>  #include <asm/setup.h> /* For COMMAND_LINE_SIZE */
> @@ -688,7 +689,7 @@ process_efi_entries(unsigned long minimum,
> unsigned long image_size)
>         u32 nr_desc;
>         int i;
> 
> -       kernel_add_identity_map((unsigned long)e, (unsigned long)(e + 
> 1), 0);
> +       kernel_add_identity_map((unsigned long)e, (unsigned long)(e +
> 1), MAP_WRITE);
> 
>         signature = (char *)&e->efi_loader_signature;
>         if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&

Thanks,
Evgeniy Baskov

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping
  2023-03-08  9:42   ` Ard Biesheuvel
@ 2023-03-08 16:11     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-08 16:11 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-08 12:42, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Current identity mapping code only supports 2M and 1G pages.
>> 4KB pages are desirable for better memory protection granularity
>> in compressed kernel code.
>> 
>> Change identity mapping code to support 4KB pages and
>> memory remapping with different attributes.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> This patch triggers an error reported by the build bots:
> 
> arch/x86/mm/ident_map.c:19:8: warning: no previous prototype for
> 'ident_split_large_pmd'

Thanks! I'll fix them (and all of the others from the bot emails)

> 
> 
>> ---
>>  arch/x86/include/asm/init.h |   1 +
>>  arch/x86/mm/ident_map.c     | 185 
>> +++++++++++++++++++++++++++++-------
>>  2 files changed, 154 insertions(+), 32 deletions(-)
>> 
>> diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
>> index 5f1d3c421f68..a8277ee82c51 100644
>> --- a/arch/x86/include/asm/init.h
>> +++ b/arch/x86/include/asm/init.h
>> @@ -8,6 +8,7 @@ struct x86_mapping_info {
>>         unsigned long page_flag;         /* page flag for PMD or PUD 
>> entry */
>>         unsigned long offset;            /* ident mapping offset */
>>         bool direct_gbpages;             /* PUD level 1GB page support 
>> */
>> +       bool allow_4kpages;              /* Allow more granular 
>> mappings with 4K pages */
>>         unsigned long kernpg_flag;       /* kernel pagetable flag 
>> override */
>>  };
>> 
>> diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
>> index 968d7005f4a7..662e794a325d 100644
>> --- a/arch/x86/mm/ident_map.c
>> +++ b/arch/x86/mm/ident_map.c
>> @@ -4,24 +4,127 @@
>>   * included by both the compressed kernel and the regular kernel.
>>   */
>> 
>> -static void ident_pmd_init(struct x86_mapping_info *info, pmd_t 
>> *pmd_page,
>> -                          unsigned long addr, unsigned long end)
>> +static void ident_pte_init(struct x86_mapping_info *info, pte_t 
>> *pte_page,
>> +                          unsigned long addr, unsigned long end,
>> +                          unsigned long flags)
>>  {
>> -       addr &= PMD_MASK;
>> -       for (; addr < end; addr += PMD_SIZE) {
>> +       addr &= PAGE_MASK;
>> +       for (; addr < end; addr += PAGE_SIZE) {
>> +               pte_t *pte = pte_page + pte_index(addr);
>> +
>> +               set_pte(pte, __pte((addr - info->offset) | flags));
>> +       }
>> +}
>> +
>> +pte_t *ident_split_large_pmd(struct x86_mapping_info *info,
>> +                            pmd_t *pmdp, unsigned long page_addr)
>> +{
>> +       unsigned long pmd_addr, page_flags;
>> +       pte_t *pte;
>> +
>> +       pte = (pte_t *)info->alloc_pgt_page(info->context);
>> +       if (!pte)
>> +               return NULL;
>> +
>> +       pmd_addr = page_addr & PMD_MASK;
>> +
>> +       /* Not a large page - clear PSE flag */
>> +       page_flags = pmd_flags(*pmdp) & ~_PSE;
>> +       ident_pte_init(info, pte, pmd_addr, pmd_addr + PMD_SIZE, 
>> page_flags);
>> +
>> +       return pte;
>> +}
>> +
>> +static int ident_pmd_init(struct x86_mapping_info *info, pmd_t 
>> *pmd_page,
>> +                         unsigned long addr, unsigned long end,
>> +                         unsigned long flags)
>> +{
>> +       unsigned long next;
>> +       bool new_table = 0;
>> +
>> +       for (; addr < end; addr = next) {
>>                 pmd_t *pmd = pmd_page + pmd_index(addr);
>> +               pte_t *pte;
>> 
>> -               if (pmd_present(*pmd))
>> +               next = (addr & PMD_MASK) + PMD_SIZE;
>> +               if (next > end)
>> +                       next = end;
>> +
>> +               /*
>> +                * Use 2M pages if 4k pages are not allowed or
>> +                * we are not mapping extra, i.e. address and size are 
>> aligned.
>> +                */
>> +
>> +               if (!info->allow_4kpages ||
>> +                   (!(addr & ~PMD_MASK) && next == addr + PMD_SIZE)) 
>> {
>> +
>> +                       pmd_t pmdval;
>> +
>> +                       addr &= PMD_MASK;
>> +                       pmdval = __pmd((addr - info->offset) | flags | 
>> _PSE);
>> +                       set_pmd(pmd, pmdval);
>>                         continue;
>> +               }
>> +
>> +               /*
>> +                * If currently mapped page is large, we need to split 
>> it.
>> +                * The case when we don't can remap 2M page to 2M page
>> +                * with different flags is already covered above.
>> +                *
>> +                * If there's nothing mapped to desired address,
>> +                * we need to allocate new page table.
>> +                */
>> 
>> -               set_pmd(pmd, __pmd((addr - info->offset) | 
>> info->page_flag));
>> +               if (pmd_large(*pmd)) {
>> +                       pte = ident_split_large_pmd(info, pmd, addr);
>> +                       new_table = 1;
>> +               } else if (!pmd_present(*pmd)) {
>> +                       pte = (pte_t 
>> *)info->alloc_pgt_page(info->context);
>> +                       new_table = 1;
>> +               } else {
>> +                       pte = pte_offset_kernel(pmd, 0);
>> +                       new_table = 0;
>> +               }
>> +
>> +               if (!pte)
>> +                       return -ENOMEM;
>> +
>> +               ident_pte_init(info, pte, addr, next, flags);
>> +
>> +               if (new_table)
>> +                       set_pmd(pmd, __pmd(__pa(pte) | 
>> info->kernpg_flag));
>>         }
>> +
>> +       return 0;
>>  }
>> 
>> +
>> +pmd_t *ident_split_large_pud(struct x86_mapping_info *info,
>> +                            pud_t *pudp, unsigned long page_addr)
>> +{
>> +       unsigned long pud_addr, page_flags;
>> +       pmd_t *pmd;
>> +
>> +       pmd = (pmd_t *)info->alloc_pgt_page(info->context);
>> +       if (!pmd)
>> +               return NULL;
>> +
>> +       pud_addr = page_addr & PUD_MASK;
>> +
>> +       /* Not a large page - clear PSE flag */
>> +       page_flags = pud_flags(*pudp) & ~_PSE;
>> +       ident_pmd_init(info, pmd, pud_addr, pud_addr + PUD_SIZE, 
>> page_flags);
>> +
>> +       return pmd;
>> +}
>> +
>> +
>>  static int ident_pud_init(struct x86_mapping_info *info, pud_t 
>> *pud_page,
>>                           unsigned long addr, unsigned long end)
>>  {
>>         unsigned long next;
>> +       bool new_table = 0;
>> +       int result;
>> 
>>         for (; addr < end; addr = next) {
>>                 pud_t *pud = pud_page + pud_index(addr);
>> @@ -31,28 +134,39 @@ static int ident_pud_init(struct x86_mapping_info 
>> *info, pud_t *pud_page,
>>                 if (next > end)
>>                         next = end;
>> 
>> +               /* Use 1G pages only if forced, even if they are 
>> supported. */
>>                 if (info->direct_gbpages) {
>>                         pud_t pudval;
>> -
>> -                       if (pud_present(*pud))
>> -                               continue;
>> +                       unsigned long flags;
>> 
>>                         addr &= PUD_MASK;
>> -                       pudval = __pud((addr - info->offset) | 
>> info->page_flag);
>> +                       flags = info->page_flag | _PSE;
>> +                       pudval = __pud((addr - info->offset) | flags);
>> +
>>                         set_pud(pud, pudval);
>>                         continue;
>>                 }
>> 
>> -               if (pud_present(*pud)) {
>> +               if (pud_large(*pud)) {
>> +                       pmd = ident_split_large_pud(info, pud, addr);
>> +                       new_table = 1;
>> +               } else if (!pud_present(*pud)) {
>> +                       pmd = (pmd_t 
>> *)info->alloc_pgt_page(info->context);
>> +                       new_table = 1;
>> +               } else {
>>                         pmd = pmd_offset(pud, 0);
>> -                       ident_pmd_init(info, pmd, addr, next);
>> -                       continue;
>> +                       new_table = 0;
>>                 }
>> -               pmd = (pmd_t *)info->alloc_pgt_page(info->context);
>> +
>>                 if (!pmd)
>>                         return -ENOMEM;
>> -               ident_pmd_init(info, pmd, addr, next);
>> -               set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
>> +
>> +               result = ident_pmd_init(info, pmd, addr, next, 
>> info->page_flag);
>> +               if (result)
>> +                       return result;
>> +
>> +               if (new_table)
>> +                       set_pud(pud, __pud(__pa(pmd) | 
>> info->kernpg_flag));
>>         }
>> 
>>         return 0;
>> @@ -63,6 +177,7 @@ static int ident_p4d_init(struct x86_mapping_info 
>> *info, p4d_t *p4d_page,
>>  {
>>         unsigned long next;
>>         int result;
>> +       bool new_table = 0;
>> 
>>         for (; addr < end; addr = next) {
>>                 p4d_t *p4d = p4d_page + p4d_index(addr);
>> @@ -72,15 +187,14 @@ static int ident_p4d_init(struct x86_mapping_info 
>> *info, p4d_t *p4d_page,
>>                 if (next > end)
>>                         next = end;
>> 
>> -               if (p4d_present(*p4d)) {
>> +               if (!p4d_present(*p4d)) {
>> +                       pud = (pud_t 
>> *)info->alloc_pgt_page(info->context);
>> +                       new_table = 1;
>> +               } else {
>>                         pud = pud_offset(p4d, 0);
>> -                       result = ident_pud_init(info, pud, addr, 
>> next);
>> -                       if (result)
>> -                               return result;
>> -
>> -                       continue;
>> +                       new_table = 0;
>>                 }
>> -               pud = (pud_t *)info->alloc_pgt_page(info->context);
>> +
>>                 if (!pud)
>>                         return -ENOMEM;
>> 
>> @@ -88,19 +202,22 @@ static int ident_p4d_init(struct x86_mapping_info 
>> *info, p4d_t *p4d_page,
>>                 if (result)
>>                         return result;
>> 
>> -               set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
>> +               if (new_table)
>> +                       set_p4d(p4d, __p4d(__pa(pud) | 
>> info->kernpg_flag));
>>         }
>> 
>>         return 0;
>>  }
>> 
>> -int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t 
>> *pgd_page,
>> -                             unsigned long pstart, unsigned long 
>> pend)
>> +int kernel_ident_mapping_init(struct x86_mapping_info *info,
>> +                             pgd_t *pgd_page, unsigned long pstart,
>> +                             unsigned long pend)
>>  {
>>         unsigned long addr = pstart + info->offset;
>>         unsigned long end = pend + info->offset;
>>         unsigned long next;
>>         int result;
>> +       bool new_table;
>> 
>>         /* Set the default pagetable flags if not supplied */
>>         if (!info->kernpg_flag)
>> @@ -117,20 +234,24 @@ int kernel_ident_mapping_init(struct 
>> x86_mapping_info *info, pgd_t *pgd_page,
>>                 if (next > end)
>>                         next = end;
>> 
>> -               if (pgd_present(*pgd)) {
>> +               if (!pgd_present(*pgd)) {
>> +                       p4d = (p4d_t 
>> *)info->alloc_pgt_page(info->context);
>> +                       new_table = 1;
>> +               } else {
>>                         p4d = p4d_offset(pgd, 0);
>> -                       result = ident_p4d_init(info, p4d, addr, 
>> next);
>> -                       if (result)
>> -                               return result;
>> -                       continue;
>> +                       new_table = 0;
>>                 }
>> 
>> -               p4d = (p4d_t *)info->alloc_pgt_page(info->context);
>>                 if (!p4d)
>>                         return -ENOMEM;
>> +
>>                 result = ident_p4d_init(info, p4d, addr, next);
>>                 if (result)
>>                         return result;
>> +
>> +               if (!new_table)
>> +                       continue;
>> +
>>                 if (pgtable_l5_enabled()) {
>>                         set_pgd(pgd, __pgd(__pa(p4d) | 
>> info->kernpg_flag));
>>                 } else {
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code
  2023-03-08 10:47   ` Ard Biesheuvel
@ 2023-03-08 16:15     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-08 16:15 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-08 13:47, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Use previously added code to use 4KB pages for mapping. Map compressed
>> and uncompressed kernel with appropriate memory protection attributes.
>> For compressed kernel set them up manually. For uncompressed kernel
>> used flags specified in ELF header.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> 
> 
> This patch breaks the 'nokaslr' command line option (at least with
> SeaBIOS) unless I apply the hunk below:
> 

Oh, I didn't think of that option.. Thanks!
I will also add the check to the identity mapping,
so the warning won't be emitted with 'nokaslr'.

> 
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -329,7 +329,8 @@ static size_t parse_elf(void *output, unsigned
> long output_len,
> 
>         handle_relocations(output, output_len, virt_addr);
> 
> -       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE))
> +       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) ||
> +           cmdline_find_option_bool("nokaslr"))
>                 goto skip_protect;
> 
>         for (i = 0; i < ehdr.e_phnum; i++) {
> @@ -481,8 +482,10 @@ asmlinkage __visible void *extract_kernel(void
> *rmode, memptr heap,
>          * If KASLR is disabled input and output regions may overlap.
>          * In this case we need to map region excutable as well.
>          */
> -       unsigned long map_flags = MAP_ALLOC | MAP_WRITE |
> -                       (IS_ENABLED(CONFIG_RANDOMIZE_BASE) ? 0 : 
> MAP_EXEC);
> +       unsigned long map_flags = MAP_ALLOC | MAP_WRITE;
> +       if (!IS_ENABLED(CONFIG_RANDOMIZE_BASE) ||
> +           cmdline_find_option_bool("nokaslr"))
> +               map_flags |= MAP_EXEC;
>         phys_addr = kernel_add_identity_map(phys_addr,
>                                             phys_addr + needed_size,
>                                             map_flags);

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 19/26] x86/build: Cleanup tools/build.c
  2022-12-15 12:38 ` [PATCH v4 19/26] x86/build: Cleanup tools/build.c Evgeniy Baskov
@ 2023-03-09 15:57   ` Ard Biesheuvel
  2023-03-09 16:25     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-09 15:57 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Use newer C standard. Since kernel requires C99 compiler now,
> we can make use of the new features to make the core more readable.
>
> Use mmap() for reading files also to make things simpler.
>
> Replace most magic numbers with defines.
>
> Should have no functional changes. This is done in preparation for the
> next changes that makes generated PE header more spec compliant.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/tools/build.c | 387 +++++++++++++++++++++++-------------
>  1 file changed, 245 insertions(+), 142 deletions(-)
>
> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
> index bd247692b701..fbc5315af032 100644
> --- a/arch/x86/boot/tools/build.c
> +++ b/arch/x86/boot/tools/build.c
> @@ -25,20 +25,21 @@
>   * Substantially overhauled by H. Peter Anvin, April 2007
>   */
>
> +#include <fcntl.h>
> +#include <stdarg.h>
> +#include <stdint.h>
>  #include <stdio.h>
> -#include <string.h>
>  #include <stdlib.h>
> -#include <stdarg.h>
> -#include <sys/types.h>
> +#include <string.h>
> +#include <sys/mman.h>
>  #include <sys/stat.h>
> +#include <sys/types.h>
>  #include <unistd.h>
> -#include <fcntl.h>
> -#include <sys/mman.h>
> +
>  #include <tools/le_byteshift.h>
> +#include <linux/pe.h>
>
> -typedef unsigned char  u8;
> -typedef unsigned short u16;
> -typedef unsigned int   u32;
> +#define round_up(x, n) (((x) + (n) - 1) & ~((n) - 1))
>
>  #define DEFAULT_MAJOR_ROOT 0
>  #define DEFAULT_MINOR_ROOT 0
> @@ -48,8 +49,13 @@ typedef unsigned int   u32;
>  #define SETUP_SECT_MIN 5
>  #define SETUP_SECT_MAX 64
>
> +#define PARAGRAPH_SIZE 16
> +#define SECTOR_SIZE 512
> +#define FILE_ALIGNMENT 512
> +#define SECTION_ALIGNMENT 4096
> +
>  /* This must be large enough to hold the entire setup */
> -u8 buf[SETUP_SECT_MAX*512];
> +uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
>
>  #define PECOFF_RELOC_RESERVE 0x20
>
> @@ -59,6 +65,52 @@ u8 buf[SETUP_SECT_MAX*512];
>  #define PECOFF_COMPAT_RESERVE 0x0
>  #endif
>
> +#define RELOC_SECTION_SIZE 10
> +
> +/* PE header has different format depending on the architecture */
> +#ifdef CONFIG_X86_64
> +typedef struct pe32plus_opt_hdr pe_opt_hdr;
> +#else
> +typedef struct pe32_opt_hdr pe_opt_hdr;
> +#endif
> +
> +static inline struct pe_hdr *get_pe_header(uint8_t *buf)
> +{
> +       uint32_t pe_offset = get_unaligned_le32(buf+MZ_HEADER_PEADDR_OFFSET);
> +       return (struct pe_hdr *)(buf + pe_offset);
> +}
> +
> +static inline pe_opt_hdr *get_pe_opt_header(uint8_t *buf)
> +{
> +       return (pe_opt_hdr *)(get_pe_header(buf) + 1);
> +}
> +
> +static inline struct section_header *get_sections(uint8_t *buf)
> +{
> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> +       uint32_t n_data_dirs = get_unaligned_le32(&hdr->data_dirs);
> +       uint8_t *sections = (uint8_t *)(hdr + 1) + n_data_dirs*sizeof(struct data_dirent);
> +       return  (struct section_header *)sections;
> +}
> +
> +static inline struct data_directory *get_data_dirs(uint8_t *buf)
> +{
> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> +       return (struct data_directory *)(hdr + 1);
> +}
> +
> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES

Can we drop this conditional?
> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | IMAGE_SCN_ALIGN_4096BYTES)
> +#define SCN_RX (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
> +#define SCN_RO (IMAGE_SCN_MEM_READ | IMAGE_SCN_ALIGN_4096BYTES)

Please drop the alignment flags - they don't apply to executable only
object files.

> +#else
> +/* With memory protection disabled all sections are RWX */
> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | \
> +               IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
> +#define SCN_RX SCN_RW
> +#define SCN_RO SCN_RW
> +#endif
> +
>  static unsigned long efi32_stub_entry;
>  static unsigned long efi64_stub_entry;
>  static unsigned long efi_pe_entry;
> @@ -70,7 +122,7 @@ static unsigned long _end;
>
>  /*----------------------------------------------------------------------*/
>
> -static const u32 crctab32[] = {
> +static const uint32_t crctab32[] = {

Replacing all the type names makes this patch very messy. Can we back
that out please?

>         0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
>         0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
>         0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
> @@ -125,12 +177,12 @@ static const u32 crctab32[] = {
>         0x2d02ef8d
>  };
>
> -static u32 partial_crc32_one(u8 c, u32 crc)
> +static uint32_t partial_crc32_one(uint8_t c, uint32_t crc)
>  {
>         return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
>  }
>
> -static u32 partial_crc32(const u8 *s, int len, u32 crc)
> +static uint32_t partial_crc32(const uint8_t *s, int len, uint32_t crc)
>  {
>         while (len--)
>                 crc = partial_crc32_one(*s++, crc);
> @@ -152,57 +204,106 @@ static void usage(void)
>         die("Usage: build setup system zoffset.h image");
>  }
>
> +static void *map_file(const char *path, size_t *psize)
> +{
> +       struct stat statbuf;
> +       size_t size;
> +       void *addr;
> +       int fd;
> +
> +       fd = open(path, O_RDONLY);
> +       if (fd < 0)
> +               die("Unable to open `%s': %m", path);
> +       if (fstat(fd, &statbuf))
> +               die("Unable to stat `%s': %m", path);
> +
> +       size = statbuf.st_size;
> +       /*
> +        * Map one byte more, to allow adding null-terminator
> +        * for text files.
> +        */
> +       addr = mmap(NULL, size + 1, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
> +       if (addr == MAP_FAILED)
> +               die("Unable to mmap '%s': %m", path);
> +
> +       close(fd);
> +
> +       *psize = size;
> +       return addr;
> +}
> +
> +static void unmap_file(void *addr, size_t size)
> +{
> +       munmap(addr, size + 1);
> +}
> +
> +static void *map_output_file(const char *path, size_t size)
> +{
> +       void *addr;
> +       int fd;
> +
> +       fd = open(path, O_RDWR | O_CREAT, 0660);
> +       if (fd < 0)
> +               die("Unable to create `%s': %m", path);
> +
> +       if (ftruncate(fd, size))
> +               die("Unable to resize `%s': %m", path);
> +
> +       addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> +       if (addr == MAP_FAILED)
> +               die("Unable to mmap '%s': %m", path);
> +
> +       return addr;
> +}
> +
>  #ifdef CONFIG_EFI_STUB
>
> -static void update_pecoff_section_header_fields(char *section_name, u32 vma, u32 size, u32 datasz, u32 offset)
> +static void update_pecoff_section_header_fields(char *section_name, uint32_t vma,
> +                                               uint32_t size, uint32_t datasz,
> +                                               uint32_t offset)
>  {
>         unsigned int pe_header;
>         unsigned short num_sections;
> -       u8 *section;
> +       struct section_header *section;
>
> -       pe_header = get_unaligned_le32(&buf[0x3c]);
> -       num_sections = get_unaligned_le16(&buf[pe_header + 6]);
> -
> -#ifdef CONFIG_X86_32
> -       section = &buf[pe_header + 0xa8];
> -#else
> -       section = &buf[pe_header + 0xb8];
> -#endif
> +       struct pe_hdr *hdr = get_pe_header(buf);
> +       num_sections = get_unaligned_le16(&hdr->sections);
> +       section = get_sections(buf);
>
>         while (num_sections > 0) {
> -               if (strncmp((char*)section, section_name, 8) == 0) {
> +               if (strncmp(section->name, section_name, 8) == 0) {
>                         /* section header size field */
> -                       put_unaligned_le32(size, section + 0x8);
> +                       put_unaligned_le32(size, &section->virtual_size);
>
>                         /* section header vma field */
> -                       put_unaligned_le32(vma, section + 0xc);
> +                       put_unaligned_le32(vma, &section->virtual_address);
>
>                         /* section header 'size of initialised data' field */
> -                       put_unaligned_le32(datasz, section + 0x10);
> +                       put_unaligned_le32(datasz, &section->raw_data_size);
>
>                         /* section header 'file offset' field */
> -                       put_unaligned_le32(offset, section + 0x14);
> +                       put_unaligned_le32(offset, &section->data_addr);
>
>                         break;
>                 }
> -               section += 0x28;
> +               section++;
>                 num_sections--;
>         }
>  }
>
> -static void update_pecoff_section_header(char *section_name, u32 offset, u32 size)
> +static void update_pecoff_section_header(char *section_name, uint32_t offset, uint32_t size)
>  {
>         update_pecoff_section_header_fields(section_name, offset, size, size, offset);
>  }
>
>  static void update_pecoff_setup_and_reloc(unsigned int size)
>  {
> -       u32 setup_offset = 0x200;
> -       u32 reloc_offset = size - PECOFF_RELOC_RESERVE - PECOFF_COMPAT_RESERVE;
> +       uint32_t setup_offset = SECTOR_SIZE;
> +       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE - PECOFF_COMPAT_RESERVE;
>  #ifdef CONFIG_EFI_MIXED
> -       u32 compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
> +       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
>  #endif
> -       u32 setup_size = reloc_offset - setup_offset;
> +       uint32_t setup_size = reloc_offset - setup_offset;
>
>         update_pecoff_section_header(".setup", setup_offset, setup_size);
>         update_pecoff_section_header(".reloc", reloc_offset, PECOFF_RELOC_RESERVE);
> @@ -211,8 +312,8 @@ static void update_pecoff_setup_and_reloc(unsigned int size)
>          * Modify .reloc section contents with a single entry. The
>          * relocation is applied to offset 10 of the relocation section.
>          */
> -       put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
> -       put_unaligned_le32(10, &buf[reloc_offset + 4]);
> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, &buf[reloc_offset]);
> +       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset + 4]);
>
>  #ifdef CONFIG_EFI_MIXED
>         update_pecoff_section_header(".compat", compat_offset, PECOFF_COMPAT_RESERVE);
> @@ -224,19 +325,17 @@ static void update_pecoff_setup_and_reloc(unsigned int size)
>          */
>         buf[compat_offset] = 0x1;
>         buf[compat_offset + 1] = 0x8;
> -       put_unaligned_le16(0x14c, &buf[compat_offset + 2]);
> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset + 2]);
>         put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset + 4]);
>  #endif
>  }
>
> -static void update_pecoff_text(unsigned int text_start, unsigned int file_sz,
> +static unsigned int update_pecoff_sections(unsigned int text_start, unsigned int text_sz,
>                                unsigned int init_sz)
>  {
> -       unsigned int pe_header;
> -       unsigned int text_sz = file_sz - text_start;
> +       unsigned int file_sz = text_start + text_sz;
>         unsigned int bss_sz = init_sz - file_sz;
> -
> -       pe_header = get_unaligned_le32(&buf[0x3c]);
> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>
>         /*
>          * The PE/COFF loader may load the image at an address which is
> @@ -254,18 +353,20 @@ static void update_pecoff_text(unsigned int text_start, unsigned int file_sz,
>          * Size of code: Subtract the size of the first sector (512 bytes)
>          * which includes the header.
>          */
> -       put_unaligned_le32(file_sz - 512 + bss_sz, &buf[pe_header + 0x1c]);
> +       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz, &hdr->text_size);
>
>         /* Size of image */
> -       put_unaligned_le32(init_sz, &buf[pe_header + 0x50]);
> +       put_unaligned_le32(init_sz, &hdr->image_size);
>
>         /*
>          * Address of entry point for PE/COFF executable
>          */
> -       put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]);
> +       put_unaligned_le32(text_start + efi_pe_entry, &hdr->entry_point);
>
>         update_pecoff_section_header_fields(".text", text_start, text_sz + bss_sz,
>                                             text_sz, text_start);
> +
> +       return text_start + file_sz;
>  }
>
>  static int reserve_pecoff_reloc_section(int c)
> @@ -275,7 +376,7 @@ static int reserve_pecoff_reloc_section(int c)
>         return PECOFF_RELOC_RESERVE;
>  }
>
> -static void efi_stub_defaults(void)
> +static void efi_stub_update_defaults(void)
>  {
>         /* Defaults for old kernel */
>  #ifdef CONFIG_X86_32
> @@ -298,7 +399,7 @@ static void efi_stub_entry_update(void)
>
>  #ifdef CONFIG_EFI_MIXED
>         if (efi32_stub_entry != addr)
> -               die("32-bit and 64-bit EFI entry points do not match\n");
> +               die("32-bit and 64-bit EFI entry points do not match");
>  #endif
>  #endif
>         put_unaligned_le32(addr, &buf[0x264]);
> @@ -310,7 +411,7 @@ static inline void update_pecoff_setup_and_reloc(unsigned int size) {}
>  static inline void update_pecoff_text(unsigned int text_start,
>                                       unsigned int file_sz,
>                                       unsigned int init_sz) {}
> -static inline void efi_stub_defaults(void) {}
> +static inline void efi_stub_update_defaults(void) {}
>  static inline void efi_stub_entry_update(void) {}
>
>  static inline int reserve_pecoff_reloc_section(int c)
> @@ -338,20 +439,15 @@ static int reserve_pecoff_compat_section(int c)
>
>  static void parse_zoffset(char *fname)
>  {
> -       FILE *file;
> -       char *p;
> -       int c;
> +       size_t size;
> +       char *data, *p;
>
> -       file = fopen(fname, "r");
> -       if (!file)
> -               die("Unable to open `%s': %m", fname);
> -       c = fread(buf, 1, sizeof(buf) - 1, file);
> -       if (ferror(file))
> -               die("read-error on `zoffset.h'");
> -       fclose(file);
> -       buf[c] = 0;
> +       data = map_file(fname, &size);
>
> -       p = (char *)buf;
> +       /* We can do that, since we mapped one byte more */
> +       data[size] = 0;
> +
> +       p = (char *)data;
>
>         while (p && *p) {
>                 PARSE_ZOFS(p, efi32_stub_entry);
> @@ -367,82 +463,99 @@ static void parse_zoffset(char *fname)
>                 while (p && (*p == '\r' || *p == '\n'))
>                         p++;
>         }
> +
> +       unmap_file(data, size);
>  }
>
> -int main(int argc, char ** argv)
> +static unsigned int read_setup(char *path)
>  {
> -       unsigned int i, sz, setup_sectors, init_sz;
> -       int c;
> -       u32 sys_size;
> -       struct stat sb;
> -       FILE *file, *dest;
> -       int fd;
> -       void *kernel;
> -       u32 crc = 0xffffffffUL;
> -
> -       efi_stub_defaults();
> -
> -       if (argc != 5)
> -               usage();
> -       parse_zoffset(argv[3]);
> -
> -       dest = fopen(argv[4], "w");
> -       if (!dest)
> -               die("Unable to write `%s': %m", argv[4]);
> +       FILE *file;
> +       unsigned int setup_size, file_size;
>
>         /* Copy the setup code */
> -       file = fopen(argv[1], "r");
> +       file = fopen(path, "r");
>         if (!file)
> -               die("Unable to open `%s': %m", argv[1]);
> -       c = fread(buf, 1, sizeof(buf), file);
> +               die("Unable to open `%s': %m", path);
> +
> +       file_size = fread(buf, 1, sizeof(buf), file);
>         if (ferror(file))
>                 die("read-error on `setup'");
> -       if (c < 1024)
> +
> +       if (file_size < 2 * SECTOR_SIZE)
>                 die("The setup must be at least 1024 bytes");
> -       if (get_unaligned_le16(&buf[510]) != 0xAA55)
> +
> +       if (get_unaligned_le16(&buf[SECTOR_SIZE - 2]) != 0xAA55)
>                 die("Boot block hasn't got boot flag (0xAA55)");
> +
>         fclose(file);
>
> -       c += reserve_pecoff_compat_section(c);
> -       c += reserve_pecoff_reloc_section(c);
> +       /* Reserve space for PE sections */
> +       file_size += reserve_pecoff_compat_section(file_size);
> +       file_size += reserve_pecoff_reloc_section(file_size);
>
>         /* Pad unused space with zeros */
> -       setup_sectors = (c + 511) / 512;
> -       if (setup_sectors < SETUP_SECT_MIN)
> -               setup_sectors = SETUP_SECT_MIN;
> -       i = setup_sectors*512;
> -       memset(buf+c, 0, i-c);
>
> -       update_pecoff_setup_and_reloc(i);
> +       setup_size = round_up(file_size, SECTOR_SIZE);
> +
> +       if (setup_size < SETUP_SECT_MIN * SECTOR_SIZE)
> +               setup_size = SETUP_SECT_MIN * SECTOR_SIZE;
> +
> +       /*
> +        * Global buffer is already initialised
> +        * to 0, but just in case, zero out padding.
> +        */
> +
> +       memset(buf + file_size, 0, setup_size - file_size);
> +
> +       return setup_size;
> +}
> +
> +int main(int argc, char **argv)
> +{
> +       size_t kern_file_size;
> +       unsigned int setup_size;
> +       unsigned int setup_sectors;
> +       unsigned int init_size;
> +       unsigned int total_size;
> +       unsigned int kern_size;
> +       void *kernel;
> +       uint32_t crc = 0xffffffffUL;
> +       uint8_t *output;
> +
> +       if (argc != 5)
> +               usage();
> +
> +       efi_stub_update_defaults();
> +       parse_zoffset(argv[3]);
> +
> +       setup_size = read_setup(argv[1]);
> +
> +       setup_sectors = setup_size/SECTOR_SIZE;
>
>         /* Set the default root device */
>         put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
>
> -       /* Open and stat the kernel file */
> -       fd = open(argv[2], O_RDONLY);
> -       if (fd < 0)
> -               die("Unable to open `%s': %m", argv[2]);
> -       if (fstat(fd, &sb))
> -               die("Unable to stat `%s': %m", argv[2]);
> -       sz = sb.st_size;
> -       kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
> -       if (kernel == MAP_FAILED)
> -               die("Unable to mmap '%s': %m", argv[2]);
> -       /* Number of 16-byte paragraphs, including space for a 4-byte CRC */
> -       sys_size = (sz + 15 + 4) / 16;
> +       /* Map kernel file to memory */
> +       kernel = map_file(argv[2], &kern_file_size);
> +
>  #ifdef CONFIG_EFI_STUB
> -       /*
> -        * COFF requires minimum 32-byte alignment of sections, and
> -        * adding a signature is problematic without that alignment.
> -        */
> -       sys_size = (sys_size + 1) & ~1;
> +       /* PE specification require 512-byte minimum section file alignment */
> +       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
> +       update_pecoff_setup_and_reloc(setup_size);
> +#else
> +       /* Number of 16-byte paragraphs, including space for a 4-byte CRC */
> +       kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
>  #endif
>
>         /* Patch the setup code with the appropriate size parameters */
> -       buf[0x1f1] = setup_sectors-1;
> -       put_unaligned_le32(sys_size, &buf[0x1f4]);
> +       buf[0x1f1] = setup_sectors - 1;
> +       put_unaligned_le32(kern_size/PARAGRAPH_SIZE, &buf[0x1f4]);
> +
> +       /* Update kernel_info offset. */
> +       put_unaligned_le32(kernel_info, &buf[0x268]);
> +
> +       init_size = get_unaligned_le32(&buf[0x260]);
>
> -       init_sz = get_unaligned_le32(&buf[0x260]);
>  #ifdef CONFIG_EFI_STUB
>         /*
>          * The decompression buffer will start at ImageBase. When relocating
> @@ -458,45 +571,35 @@ int main(int argc, char ** argv)
>          * For future-proofing, increase init_sz if necessary.
>          */
>
> -       if (init_sz - _end < i + _ehead) {
> -               init_sz = (i + _ehead + _end + 4095) & ~4095;
> -               put_unaligned_le32(init_sz, &buf[0x260]);
> +       if (init_size - _end < setup_size + _ehead) {
> +               init_size = round_up(setup_size + _ehead + _end, SECTION_ALIGNMENT);
> +               put_unaligned_le32(init_size, &buf[0x260]);
>         }
> -#endif
> -       update_pecoff_text(setup_sectors * 512, i + (sys_size * 16), init_sz);
>
> -       efi_stub_entry_update();
> -
> -       /* Update kernel_info offset. */
> -       put_unaligned_le32(kernel_info, &buf[0x268]);
> +       total_size = update_pecoff_sections(setup_size, kern_size, init_size);
>
> -       crc = partial_crc32(buf, i, crc);
> -       if (fwrite(buf, 1, i, dest) != i)
> -               die("Writing setup failed");
> +       efi_stub_entry_update();
> +#else
> +       (void)init_size;
> +       total_size = setup_size + kern_size;
> +#endif
>
> -       /* Copy the kernel code */
> -       crc = partial_crc32(kernel, sz, crc);
> -       if (fwrite(kernel, 1, sz, dest) != sz)
> -               die("Writing kernel failed");
> +       output = map_output_file(argv[4], total_size);
>
> -       /* Add padding leaving 4 bytes for the checksum */
> -       while (sz++ < (sys_size*16) - 4) {
> -               crc = partial_crc32_one('\0', crc);
> -               if (fwrite("\0", 1, 1, dest) != 1)
> -                       die("Writing padding failed");
> -       }
> +       memcpy(output, buf, setup_size);
> +       memcpy(output + setup_size, kernel, kern_file_size);
> +       memset(output + setup_size + kern_file_size, 0, kern_size - kern_file_size);
>
> -       /* Write the CRC */
> -       put_unaligned_le32(crc, buf);
> -       if (fwrite(buf, 1, 4, dest) != 4)
> -               die("Writing CRC failed");
> +       /* Calculate and write kernel checksum. */
> +       crc = partial_crc32(output, total_size - 4, crc);
> +       put_unaligned_le32(crc, &output[total_size - 4]);
>
> -       /* Catch any delayed write failures */
> -       if (fclose(dest))
> -               die("Writing image failed");
> +       /* Catch any delayed write failures. */
> +       if (munmap(output, total_size) < 0)
> +               die("Writing kernel failed");
>
> -       close(fd);
> +       unmap_file(kernel, kern_file_size);
>
> -       /* Everything is OK */
> +       /* Everything is OK. */
>         return 0;
>  }
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2022-12-15 12:38 ` [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub Evgeniy Baskov
@ 2023-03-09 16:00   ` Ard Biesheuvel
  2023-03-09 17:05     ` Evgeniy Baskov
  2023-03-09 16:49   ` Ard Biesheuvel
  2023-03-10 15:08   ` Ard Biesheuvel
  2 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-09 16:00 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Doing it that way allows setting up stricter memory attributes,
> simplifies boot code path and removes potential relocation
> of kernel image.
>
> Wire up required interfaces and minimally initialize zero page
> fields needed for it to function correctly.
>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/compressed/head_32.S            |  50 ++++-
>  arch/x86/boot/compressed/head_64.S            |  58 ++++-
>  drivers/firmware/efi/Kconfig                  |   2 +
>  drivers/firmware/efi/libstub/Makefile         |   2 +-
>  .../firmware/efi/libstub/x86-extract-direct.c | 208 ++++++++++++++++++
>  drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
>  drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
>  7 files changed, 338 insertions(+), 115 deletions(-)
>  create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
>  create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
>
> diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
> index ead6007df1e5..0be75e5072ae 100644
> --- a/arch/x86/boot/compressed/head_32.S
> +++ b/arch/x86/boot/compressed/head_32.S
> @@ -152,11 +152,57 @@ SYM_FUNC_END(startup_32)
>
>  #ifdef CONFIG_EFI_STUB
>  SYM_FUNC_START(efi32_stub_entry)
> +/*
> + * Calculate the delta between where we were compiled to run
> + * at and where we were actually loaded at.  This can only be done
> + * with a short local call on x86.  Nothing  else will tell us what
> + * address we are running at.  The reserved chunk of the real-mode
> + * data at 0x1e4 (defined as a scratch field) are used as the stack
> + * for this calculation. Only 4 bytes are needed.
> + */

Please drop this comment

> +       call    1f
> +1:     popl    %ebx
> +       addl    $_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx

Please drop this and ...

> +
> +       /* Clear BSS */
> +       xorl    %eax, %eax
> +       leal    _bss@GOTOFF(%ebx), %edi
> +       leal    _ebss@GOTOFF(%ebx), %ecx

just use (_bss - 1b) here (etc)

> +       subl    %edi, %ecx
> +       shrl    $2, %ecx
> +       rep     stosl
> +
>         add     $0x4, %esp
>         movl    8(%esp), %esi   /* save boot_params pointer */
> +       movl    %edx, %edi      /* save GOT address */

What does this do?

>         call    efi_main
> -       /* efi_main returns the possibly relocated address of startup_32 */
> -       jmp     *%eax
> +       movl    %eax, %ecx
> +
> +       /*
> +        * efi_main returns the possibly
> +        * relocated address of extracted kernel entry point.
> +        */
> +
> +       cli
> +
> +       /* Load new GDT */
> +       leal    gdt@GOTOFF(%ebx), %eax
> +       movl    %eax, 2(%eax)
> +       lgdt    (%eax)
> +
> +       /* Load segment registers with our descriptors */
> +       movl    $__BOOT_DS, %eax
> +       movl    %eax, %ds
> +       movl    %eax, %es
> +       movl    %eax, %fs
> +       movl    %eax, %gs
> +       movl    %eax, %ss
> +
> +       /* Zero EFLAGS */
> +       pushl   $0
> +       popfl
> +
> +       jmp     *%ecx
>  SYM_FUNC_END(efi32_stub_entry)
>  SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
>  #endif
...

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 19/26] x86/build: Cleanup tools/build.c
  2023-03-09 15:57   ` Ard Biesheuvel
@ 2023-03-09 16:25     ` Evgeniy Baskov
  2023-03-09 16:50       ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-09 16:25 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-09 18:57, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Use newer C standard. Since kernel requires C99 compiler now,
>> we can make use of the new features to make the core more readable.
>> 
>> Use mmap() for reading files also to make things simpler.
>> 
>> Replace most magic numbers with defines.
>> 
>> Should have no functional changes. This is done in preparation for the
>> next changes that makes generated PE header more spec compliant.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> ---
>>  arch/x86/boot/tools/build.c | 387 
>> +++++++++++++++++++++++-------------
>>  1 file changed, 245 insertions(+), 142 deletions(-)
>> 
>> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
>> index bd247692b701..fbc5315af032 100644
>> --- a/arch/x86/boot/tools/build.c
>> +++ b/arch/x86/boot/tools/build.c
>> @@ -25,20 +25,21 @@
>>   * Substantially overhauled by H. Peter Anvin, April 2007
>>   */
>> 
>> +#include <fcntl.h>
>> +#include <stdarg.h>
>> +#include <stdint.h>
>>  #include <stdio.h>
>> -#include <string.h>
>>  #include <stdlib.h>
>> -#include <stdarg.h>
>> -#include <sys/types.h>
>> +#include <string.h>
>> +#include <sys/mman.h>
>>  #include <sys/stat.h>
>> +#include <sys/types.h>
>>  #include <unistd.h>
>> -#include <fcntl.h>
>> -#include <sys/mman.h>
>> +
>>  #include <tools/le_byteshift.h>
>> +#include <linux/pe.h>
>> 
>> -typedef unsigned char  u8;
>> -typedef unsigned short u16;
>> -typedef unsigned int   u32;
>> +#define round_up(x, n) (((x) + (n) - 1) & ~((n) - 1))
>> 
>>  #define DEFAULT_MAJOR_ROOT 0
>>  #define DEFAULT_MINOR_ROOT 0
>> @@ -48,8 +49,13 @@ typedef unsigned int   u32;
>>  #define SETUP_SECT_MIN 5
>>  #define SETUP_SECT_MAX 64
>> 
>> +#define PARAGRAPH_SIZE 16
>> +#define SECTOR_SIZE 512
>> +#define FILE_ALIGNMENT 512
>> +#define SECTION_ALIGNMENT 4096
>> +
>>  /* This must be large enough to hold the entire setup */
>> -u8 buf[SETUP_SECT_MAX*512];
>> +uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
>> 
>>  #define PECOFF_RELOC_RESERVE 0x20
>> 
>> @@ -59,6 +65,52 @@ u8 buf[SETUP_SECT_MAX*512];
>>  #define PECOFF_COMPAT_RESERVE 0x0
>>  #endif
>> 
>> +#define RELOC_SECTION_SIZE 10
>> +
>> +/* PE header has different format depending on the architecture */
>> +#ifdef CONFIG_X86_64
>> +typedef struct pe32plus_opt_hdr pe_opt_hdr;
>> +#else
>> +typedef struct pe32_opt_hdr pe_opt_hdr;
>> +#endif
>> +
>> +static inline struct pe_hdr *get_pe_header(uint8_t *buf)
>> +{
>> +       uint32_t pe_offset = 
>> get_unaligned_le32(buf+MZ_HEADER_PEADDR_OFFSET);
>> +       return (struct pe_hdr *)(buf + pe_offset);
>> +}
>> +
>> +static inline pe_opt_hdr *get_pe_opt_header(uint8_t *buf)
>> +{
>> +       return (pe_opt_hdr *)(get_pe_header(buf) + 1);
>> +}
>> +
>> +static inline struct section_header *get_sections(uint8_t *buf)
>> +{
>> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> +       uint32_t n_data_dirs = get_unaligned_le32(&hdr->data_dirs);
>> +       uint8_t *sections = (uint8_t *)(hdr + 1) + 
>> n_data_dirs*sizeof(struct data_dirent);
>> +       return  (struct section_header *)sections;
>> +}
>> +
>> +static inline struct data_directory *get_data_dirs(uint8_t *buf)
>> +{
>> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> +       return (struct data_directory *)(hdr + 1);
>> +}
>> +
>> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
> 
> Can we drop this conditional?

Without CONFIG_EFI_DXE_MEM_ATTRIBUTES memory attributes are not
getting applies anywhere, so this would break 'nokaslr' on UEFI
implementations that honor section attributes.

KASLR is already broken without that option on implementations
that disallow execution of the free memory though. But unlike
free memory, sections are more likely to get protected, I think.

>> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | 
>> IMAGE_SCN_ALIGN_4096BYTES)
>> +#define SCN_RX (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE | 
>> IMAGE_SCN_ALIGN_4096BYTES)
>> +#define SCN_RO (IMAGE_SCN_MEM_READ | IMAGE_SCN_ALIGN_4096BYTES)
> 
> Please drop the alignment flags - they don't apply to executable only
> object files.

Got it, will remove them in v5.

> 
>> +#else
>> +/* With memory protection disabled all sections are RWX */
>> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | \
>> +               IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
>> +#define SCN_RX SCN_RW
>> +#define SCN_RO SCN_RW
>> +#endif
>> +
>>  static unsigned long efi32_stub_entry;
>>  static unsigned long efi64_stub_entry;
>>  static unsigned long efi_pe_entry;
>> @@ -70,7 +122,7 @@ static unsigned long _end;
>> 
>>  
>> /*----------------------------------------------------------------------*/
>> 
>> -static const u32 crctab32[] = {
>> +static const uint32_t crctab32[] = {
> 
> Replacing all the type names makes this patch very messy. Can we back
> that out please?

Ok, I will revert them.

> 
>>         0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
>>         0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
>>         0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
>> @@ -125,12 +177,12 @@ static const u32 crctab32[] = {
>>         0x2d02ef8d
>>  };
>> 
>> -static u32 partial_crc32_one(u8 c, u32 crc)
>> +static uint32_t partial_crc32_one(uint8_t c, uint32_t crc)
>>  {
>>         return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
>>  }
>> 
>> -static u32 partial_crc32(const u8 *s, int len, u32 crc)
>> +static uint32_t partial_crc32(const uint8_t *s, int len, uint32_t 
>> crc)
>>  {
>>         while (len--)
>>                 crc = partial_crc32_one(*s++, crc);
>> @@ -152,57 +204,106 @@ static void usage(void)
>>         die("Usage: build setup system zoffset.h image");
>>  }
>> 
>> +static void *map_file(const char *path, size_t *psize)
>> +{
>> +       struct stat statbuf;
>> +       size_t size;
>> +       void *addr;
>> +       int fd;
>> +
>> +       fd = open(path, O_RDONLY);
>> +       if (fd < 0)
>> +               die("Unable to open `%s': %m", path);
>> +       if (fstat(fd, &statbuf))
>> +               die("Unable to stat `%s': %m", path);
>> +
>> +       size = statbuf.st_size;
>> +       /*
>> +        * Map one byte more, to allow adding null-terminator
>> +        * for text files.
>> +        */
>> +       addr = mmap(NULL, size + 1, PROT_READ | PROT_WRITE, 
>> MAP_PRIVATE, fd, 0);
>> +       if (addr == MAP_FAILED)
>> +               die("Unable to mmap '%s': %m", path);
>> +
>> +       close(fd);
>> +
>> +       *psize = size;
>> +       return addr;
>> +}
>> +
>> +static void unmap_file(void *addr, size_t size)
>> +{
>> +       munmap(addr, size + 1);
>> +}
>> +
>> +static void *map_output_file(const char *path, size_t size)
>> +{
>> +       void *addr;
>> +       int fd;
>> +
>> +       fd = open(path, O_RDWR | O_CREAT, 0660);
>> +       if (fd < 0)
>> +               die("Unable to create `%s': %m", path);
>> +
>> +       if (ftruncate(fd, size))
>> +               die("Unable to resize `%s': %m", path);
>> +
>> +       addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, 
>> fd, 0);
>> +       if (addr == MAP_FAILED)
>> +               die("Unable to mmap '%s': %m", path);
>> +
>> +       return addr;
>> +}
>> +
>>  #ifdef CONFIG_EFI_STUB
>> 
>> -static void update_pecoff_section_header_fields(char *section_name, 
>> u32 vma, u32 size, u32 datasz, u32 offset)
>> +static void update_pecoff_section_header_fields(char *section_name, 
>> uint32_t vma,
>> +                                               uint32_t size, 
>> uint32_t datasz,
>> +                                               uint32_t offset)
>>  {
>>         unsigned int pe_header;
>>         unsigned short num_sections;
>> -       u8 *section;
>> +       struct section_header *section;
>> 
>> -       pe_header = get_unaligned_le32(&buf[0x3c]);
>> -       num_sections = get_unaligned_le16(&buf[pe_header + 6]);
>> -
>> -#ifdef CONFIG_X86_32
>> -       section = &buf[pe_header + 0xa8];
>> -#else
>> -       section = &buf[pe_header + 0xb8];
>> -#endif
>> +       struct pe_hdr *hdr = get_pe_header(buf);
>> +       num_sections = get_unaligned_le16(&hdr->sections);
>> +       section = get_sections(buf);
>> 
>>         while (num_sections > 0) {
>> -               if (strncmp((char*)section, section_name, 8) == 0) {
>> +               if (strncmp(section->name, section_name, 8) == 0) {
>>                         /* section header size field */
>> -                       put_unaligned_le32(size, section + 0x8);
>> +                       put_unaligned_le32(size, 
>> &section->virtual_size);
>> 
>>                         /* section header vma field */
>> -                       put_unaligned_le32(vma, section + 0xc);
>> +                       put_unaligned_le32(vma, 
>> &section->virtual_address);
>> 
>>                         /* section header 'size of initialised data' 
>> field */
>> -                       put_unaligned_le32(datasz, section + 0x10);
>> +                       put_unaligned_le32(datasz, 
>> &section->raw_data_size);
>> 
>>                         /* section header 'file offset' field */
>> -                       put_unaligned_le32(offset, section + 0x14);
>> +                       put_unaligned_le32(offset, 
>> &section->data_addr);
>> 
>>                         break;
>>                 }
>> -               section += 0x28;
>> +               section++;
>>                 num_sections--;
>>         }
>>  }
>> 
>> -static void update_pecoff_section_header(char *section_name, u32 
>> offset, u32 size)
>> +static void update_pecoff_section_header(char *section_name, uint32_t 
>> offset, uint32_t size)
>>  {
>>         update_pecoff_section_header_fields(section_name, offset, 
>> size, size, offset);
>>  }
>> 
>>  static void update_pecoff_setup_and_reloc(unsigned int size)
>>  {
>> -       u32 setup_offset = 0x200;
>> -       u32 reloc_offset = size - PECOFF_RELOC_RESERVE - 
>> PECOFF_COMPAT_RESERVE;
>> +       uint32_t setup_offset = SECTOR_SIZE;
>> +       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE - 
>> PECOFF_COMPAT_RESERVE;
>>  #ifdef CONFIG_EFI_MIXED
>> -       u32 compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
>> +       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
>>  #endif
>> -       u32 setup_size = reloc_offset - setup_offset;
>> +       uint32_t setup_size = reloc_offset - setup_offset;
>> 
>>         update_pecoff_section_header(".setup", setup_offset, 
>> setup_size);
>>         update_pecoff_section_header(".reloc", reloc_offset, 
>> PECOFF_RELOC_RESERVE);
>> @@ -211,8 +312,8 @@ static void update_pecoff_setup_and_reloc(unsigned 
>> int size)
>>          * Modify .reloc section contents with a single entry. The
>>          * relocation is applied to offset 10 of the relocation 
>> section.
>>          */
>> -       put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
>> -       put_unaligned_le32(10, &buf[reloc_offset + 4]);
>> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, 
>> &buf[reloc_offset]);
>> +       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset + 
>> 4]);
>> 
>>  #ifdef CONFIG_EFI_MIXED
>>         update_pecoff_section_header(".compat", compat_offset, 
>> PECOFF_COMPAT_RESERVE);
>> @@ -224,19 +325,17 @@ static void 
>> update_pecoff_setup_and_reloc(unsigned int size)
>>          */
>>         buf[compat_offset] = 0x1;
>>         buf[compat_offset + 1] = 0x8;
>> -       put_unaligned_le16(0x14c, &buf[compat_offset + 2]);
>> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset 
>> + 2]);
>>         put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset + 
>> 4]);
>>  #endif
>>  }
>> 
>> -static void update_pecoff_text(unsigned int text_start, unsigned int 
>> file_sz,
>> +static unsigned int update_pecoff_sections(unsigned int text_start, 
>> unsigned int text_sz,
>>                                unsigned int init_sz)
>>  {
>> -       unsigned int pe_header;
>> -       unsigned int text_sz = file_sz - text_start;
>> +       unsigned int file_sz = text_start + text_sz;
>>         unsigned int bss_sz = init_sz - file_sz;
>> -
>> -       pe_header = get_unaligned_le32(&buf[0x3c]);
>> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> 
>>         /*
>>          * The PE/COFF loader may load the image at an address which 
>> is
>> @@ -254,18 +353,20 @@ static void update_pecoff_text(unsigned int 
>> text_start, unsigned int file_sz,
>>          * Size of code: Subtract the size of the first sector (512 
>> bytes)
>>          * which includes the header.
>>          */
>> -       put_unaligned_le32(file_sz - 512 + bss_sz, &buf[pe_header + 
>> 0x1c]);
>> +       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz, 
>> &hdr->text_size);
>> 
>>         /* Size of image */
>> -       put_unaligned_le32(init_sz, &buf[pe_header + 0x50]);
>> +       put_unaligned_le32(init_sz, &hdr->image_size);
>> 
>>         /*
>>          * Address of entry point for PE/COFF executable
>>          */
>> -       put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 
>> 0x28]);
>> +       put_unaligned_le32(text_start + efi_pe_entry, 
>> &hdr->entry_point);
>> 
>>         update_pecoff_section_header_fields(".text", text_start, 
>> text_sz + bss_sz,
>>                                             text_sz, text_start);
>> +
>> +       return text_start + file_sz;
>>  }
>> 
>>  static int reserve_pecoff_reloc_section(int c)
>> @@ -275,7 +376,7 @@ static int reserve_pecoff_reloc_section(int c)
>>         return PECOFF_RELOC_RESERVE;
>>  }
>> 
>> -static void efi_stub_defaults(void)
>> +static void efi_stub_update_defaults(void)
>>  {
>>         /* Defaults for old kernel */
>>  #ifdef CONFIG_X86_32
>> @@ -298,7 +399,7 @@ static void efi_stub_entry_update(void)
>> 
>>  #ifdef CONFIG_EFI_MIXED
>>         if (efi32_stub_entry != addr)
>> -               die("32-bit and 64-bit EFI entry points do not 
>> match\n");
>> +               die("32-bit and 64-bit EFI entry points do not 
>> match");
>>  #endif
>>  #endif
>>         put_unaligned_le32(addr, &buf[0x264]);
>> @@ -310,7 +411,7 @@ static inline void 
>> update_pecoff_setup_and_reloc(unsigned int size) {}
>>  static inline void update_pecoff_text(unsigned int text_start,
>>                                       unsigned int file_sz,
>>                                       unsigned int init_sz) {}
>> -static inline void efi_stub_defaults(void) {}
>> +static inline void efi_stub_update_defaults(void) {}
>>  static inline void efi_stub_entry_update(void) {}
>> 
>>  static inline int reserve_pecoff_reloc_section(int c)
>> @@ -338,20 +439,15 @@ static int reserve_pecoff_compat_section(int c)
>> 
>>  static void parse_zoffset(char *fname)
>>  {
>> -       FILE *file;
>> -       char *p;
>> -       int c;
>> +       size_t size;
>> +       char *data, *p;
>> 
>> -       file = fopen(fname, "r");
>> -       if (!file)
>> -               die("Unable to open `%s': %m", fname);
>> -       c = fread(buf, 1, sizeof(buf) - 1, file);
>> -       if (ferror(file))
>> -               die("read-error on `zoffset.h'");
>> -       fclose(file);
>> -       buf[c] = 0;
>> +       data = map_file(fname, &size);
>> 
>> -       p = (char *)buf;
>> +       /* We can do that, since we mapped one byte more */
>> +       data[size] = 0;
>> +
>> +       p = (char *)data;
>> 
>>         while (p && *p) {
>>                 PARSE_ZOFS(p, efi32_stub_entry);
>> @@ -367,82 +463,99 @@ static void parse_zoffset(char *fname)
>>                 while (p && (*p == '\r' || *p == '\n'))
>>                         p++;
>>         }
>> +
>> +       unmap_file(data, size);
>>  }
>> 
>> -int main(int argc, char ** argv)
>> +static unsigned int read_setup(char *path)
>>  {
>> -       unsigned int i, sz, setup_sectors, init_sz;
>> -       int c;
>> -       u32 sys_size;
>> -       struct stat sb;
>> -       FILE *file, *dest;
>> -       int fd;
>> -       void *kernel;
>> -       u32 crc = 0xffffffffUL;
>> -
>> -       efi_stub_defaults();
>> -
>> -       if (argc != 5)
>> -               usage();
>> -       parse_zoffset(argv[3]);
>> -
>> -       dest = fopen(argv[4], "w");
>> -       if (!dest)
>> -               die("Unable to write `%s': %m", argv[4]);
>> +       FILE *file;
>> +       unsigned int setup_size, file_size;
>> 
>>         /* Copy the setup code */
>> -       file = fopen(argv[1], "r");
>> +       file = fopen(path, "r");
>>         if (!file)
>> -               die("Unable to open `%s': %m", argv[1]);
>> -       c = fread(buf, 1, sizeof(buf), file);
>> +               die("Unable to open `%s': %m", path);
>> +
>> +       file_size = fread(buf, 1, sizeof(buf), file);
>>         if (ferror(file))
>>                 die("read-error on `setup'");
>> -       if (c < 1024)
>> +
>> +       if (file_size < 2 * SECTOR_SIZE)
>>                 die("The setup must be at least 1024 bytes");
>> -       if (get_unaligned_le16(&buf[510]) != 0xAA55)
>> +
>> +       if (get_unaligned_le16(&buf[SECTOR_SIZE - 2]) != 0xAA55)
>>                 die("Boot block hasn't got boot flag (0xAA55)");
>> +
>>         fclose(file);
>> 
>> -       c += reserve_pecoff_compat_section(c);
>> -       c += reserve_pecoff_reloc_section(c);
>> +       /* Reserve space for PE sections */
>> +       file_size += reserve_pecoff_compat_section(file_size);
>> +       file_size += reserve_pecoff_reloc_section(file_size);
>> 
>>         /* Pad unused space with zeros */
>> -       setup_sectors = (c + 511) / 512;
>> -       if (setup_sectors < SETUP_SECT_MIN)
>> -               setup_sectors = SETUP_SECT_MIN;
>> -       i = setup_sectors*512;
>> -       memset(buf+c, 0, i-c);
>> 
>> -       update_pecoff_setup_and_reloc(i);
>> +       setup_size = round_up(file_size, SECTOR_SIZE);
>> +
>> +       if (setup_size < SETUP_SECT_MIN * SECTOR_SIZE)
>> +               setup_size = SETUP_SECT_MIN * SECTOR_SIZE;
>> +
>> +       /*
>> +        * Global buffer is already initialised
>> +        * to 0, but just in case, zero out padding.
>> +        */
>> +
>> +       memset(buf + file_size, 0, setup_size - file_size);
>> +
>> +       return setup_size;
>> +}
>> +
>> +int main(int argc, char **argv)
>> +{
>> +       size_t kern_file_size;
>> +       unsigned int setup_size;
>> +       unsigned int setup_sectors;
>> +       unsigned int init_size;
>> +       unsigned int total_size;
>> +       unsigned int kern_size;
>> +       void *kernel;
>> +       uint32_t crc = 0xffffffffUL;
>> +       uint8_t *output;
>> +
>> +       if (argc != 5)
>> +               usage();
>> +
>> +       efi_stub_update_defaults();
>> +       parse_zoffset(argv[3]);
>> +
>> +       setup_size = read_setup(argv[1]);
>> +
>> +       setup_sectors = setup_size/SECTOR_SIZE;
>> 
>>         /* Set the default root device */
>>         put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
>> 
>> -       /* Open and stat the kernel file */
>> -       fd = open(argv[2], O_RDONLY);
>> -       if (fd < 0)
>> -               die("Unable to open `%s': %m", argv[2]);
>> -       if (fstat(fd, &sb))
>> -               die("Unable to stat `%s': %m", argv[2]);
>> -       sz = sb.st_size;
>> -       kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
>> -       if (kernel == MAP_FAILED)
>> -               die("Unable to mmap '%s': %m", argv[2]);
>> -       /* Number of 16-byte paragraphs, including space for a 4-byte 
>> CRC */
>> -       sys_size = (sz + 15 + 4) / 16;
>> +       /* Map kernel file to memory */
>> +       kernel = map_file(argv[2], &kern_file_size);
>> +
>>  #ifdef CONFIG_EFI_STUB
>> -       /*
>> -        * COFF requires minimum 32-byte alignment of sections, and
>> -        * adding a signature is problematic without that alignment.
>> -        */
>> -       sys_size = (sys_size + 1) & ~1;
>> +       /* PE specification require 512-byte minimum section file 
>> alignment */
>> +       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
>> +       update_pecoff_setup_and_reloc(setup_size);
>> +#else
>> +       /* Number of 16-byte paragraphs, including space for a 4-byte 
>> CRC */
>> +       kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
>>  #endif
>> 
>>         /* Patch the setup code with the appropriate size parameters 
>> */
>> -       buf[0x1f1] = setup_sectors-1;
>> -       put_unaligned_le32(sys_size, &buf[0x1f4]);
>> +       buf[0x1f1] = setup_sectors - 1;
>> +       put_unaligned_le32(kern_size/PARAGRAPH_SIZE, &buf[0x1f4]);
>> +
>> +       /* Update kernel_info offset. */
>> +       put_unaligned_le32(kernel_info, &buf[0x268]);
>> +
>> +       init_size = get_unaligned_le32(&buf[0x260]);
>> 
>> -       init_sz = get_unaligned_le32(&buf[0x260]);
>>  #ifdef CONFIG_EFI_STUB
>>         /*
>>          * The decompression buffer will start at ImageBase. When 
>> relocating
>> @@ -458,45 +571,35 @@ int main(int argc, char ** argv)
>>          * For future-proofing, increase init_sz if necessary.
>>          */
>> 
>> -       if (init_sz - _end < i + _ehead) {
>> -               init_sz = (i + _ehead + _end + 4095) & ~4095;
>> -               put_unaligned_le32(init_sz, &buf[0x260]);
>> +       if (init_size - _end < setup_size + _ehead) {
>> +               init_size = round_up(setup_size + _ehead + _end, 
>> SECTION_ALIGNMENT);
>> +               put_unaligned_le32(init_size, &buf[0x260]);
>>         }
>> -#endif
>> -       update_pecoff_text(setup_sectors * 512, i + (sys_size * 16), 
>> init_sz);
>> 
>> -       efi_stub_entry_update();
>> -
>> -       /* Update kernel_info offset. */
>> -       put_unaligned_le32(kernel_info, &buf[0x268]);
>> +       total_size = update_pecoff_sections(setup_size, kern_size, 
>> init_size);
>> 
>> -       crc = partial_crc32(buf, i, crc);
>> -       if (fwrite(buf, 1, i, dest) != i)
>> -               die("Writing setup failed");
>> +       efi_stub_entry_update();
>> +#else
>> +       (void)init_size;
>> +       total_size = setup_size + kern_size;
>> +#endif
>> 
>> -       /* Copy the kernel code */
>> -       crc = partial_crc32(kernel, sz, crc);
>> -       if (fwrite(kernel, 1, sz, dest) != sz)
>> -               die("Writing kernel failed");
>> +       output = map_output_file(argv[4], total_size);
>> 
>> -       /* Add padding leaving 4 bytes for the checksum */
>> -       while (sz++ < (sys_size*16) - 4) {
>> -               crc = partial_crc32_one('\0', crc);
>> -               if (fwrite("\0", 1, 1, dest) != 1)
>> -                       die("Writing padding failed");
>> -       }
>> +       memcpy(output, buf, setup_size);
>> +       memcpy(output + setup_size, kernel, kern_file_size);
>> +       memset(output + setup_size + kern_file_size, 0, kern_size - 
>> kern_file_size);
>> 
>> -       /* Write the CRC */
>> -       put_unaligned_le32(crc, buf);
>> -       if (fwrite(buf, 1, 4, dest) != 4)
>> -               die("Writing CRC failed");
>> +       /* Calculate and write kernel checksum. */
>> +       crc = partial_crc32(output, total_size - 4, crc);
>> +       put_unaligned_le32(crc, &output[total_size - 4]);
>> 
>> -       /* Catch any delayed write failures */
>> -       if (fclose(dest))
>> -               die("Writing image failed");
>> +       /* Catch any delayed write failures. */
>> +       if (munmap(output, total_size) < 0)
>> +               die("Writing kernel failed");
>> 
>> -       close(fd);
>> +       unmap_file(kernel, kern_file_size);
>> 
>> -       /* Everything is OK */
>> +       /* Everything is OK. */
>>         return 0;
>>  }
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2022-12-15 12:38 ` [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub Evgeniy Baskov
  2023-03-09 16:00   ` Ard Biesheuvel
@ 2023-03-09 16:49   ` Ard Biesheuvel
  2023-03-09 17:10     ` Evgeniy Baskov
  2023-03-10 15:08   ` Ard Biesheuvel
  2 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-09 16:49 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Doing it that way allows setting up stricter memory attributes,
> simplifies boot code path and removes potential relocation
> of kernel image.
>
> Wire up required interfaces and minimally initialize zero page
> fields needed for it to function correctly.
>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

OK I just realized that there is a problem with this approach: since
we now decompress the image while running in the EFI stub (i.e.,
before ExitBootServices()), we cannot just randomly pick a
EFI_CONVENTIONAL_MEMORY region to place the kernel, we need to
allocate the pages using the boot services. Otherwise, subsequent
allocations (or concurrent ones occurring in the firmware in event
handlers etc) may land right in the middle, which is unlikely to be
what we want.


> ---
>  arch/x86/boot/compressed/head_32.S            |  50 ++++-
>  arch/x86/boot/compressed/head_64.S            |  58 ++++-
>  drivers/firmware/efi/Kconfig                  |   2 +
>  drivers/firmware/efi/libstub/Makefile         |   2 +-
>  .../firmware/efi/libstub/x86-extract-direct.c | 208 ++++++++++++++++++
>  drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
>  drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
>  7 files changed, 338 insertions(+), 115 deletions(-)
>  create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
>  create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
>
> diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
> index ead6007df1e5..0be75e5072ae 100644
> --- a/arch/x86/boot/compressed/head_32.S
> +++ b/arch/x86/boot/compressed/head_32.S
> @@ -152,11 +152,57 @@ SYM_FUNC_END(startup_32)
>
>  #ifdef CONFIG_EFI_STUB
>  SYM_FUNC_START(efi32_stub_entry)
> +/*
> + * Calculate the delta between where we were compiled to run
> + * at and where we were actually loaded at.  This can only be done
> + * with a short local call on x86.  Nothing  else will tell us what
> + * address we are running at.  The reserved chunk of the real-mode
> + * data at 0x1e4 (defined as a scratch field) are used as the stack
> + * for this calculation. Only 4 bytes are needed.
> + */
> +       call    1f
> +1:     popl    %ebx
> +       addl    $_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx
> +
> +       /* Clear BSS */
> +       xorl    %eax, %eax
> +       leal    _bss@GOTOFF(%ebx), %edi
> +       leal    _ebss@GOTOFF(%ebx), %ecx
> +       subl    %edi, %ecx
> +       shrl    $2, %ecx
> +       rep     stosl
> +
>         add     $0x4, %esp
>         movl    8(%esp), %esi   /* save boot_params pointer */
> +       movl    %edx, %edi      /* save GOT address */
>         call    efi_main
> -       /* efi_main returns the possibly relocated address of startup_32 */
> -       jmp     *%eax
> +       movl    %eax, %ecx
> +
> +       /*
> +        * efi_main returns the possibly
> +        * relocated address of extracted kernel entry point.
> +        */
> +
> +       cli
> +
> +       /* Load new GDT */
> +       leal    gdt@GOTOFF(%ebx), %eax
> +       movl    %eax, 2(%eax)
> +       lgdt    (%eax)
> +
> +       /* Load segment registers with our descriptors */
> +       movl    $__BOOT_DS, %eax
> +       movl    %eax, %ds
> +       movl    %eax, %es
> +       movl    %eax, %fs
> +       movl    %eax, %gs
> +       movl    %eax, %ss
> +
> +       /* Zero EFLAGS */
> +       pushl   $0
> +       popfl
> +
> +       jmp     *%ecx
>  SYM_FUNC_END(efi32_stub_entry)
>  SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
>  #endif
> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index 2dd8be0583d2..7cfef7bd0424 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -529,12 +529,64 @@ SYM_CODE_END(startup_64)
>         .org 0x390
>  #endif
>  SYM_FUNC_START(efi64_stub_entry)
> +       /* Preserve first parameter */
> +       movq    %rdi, %r10
> +
> +       /* Clear BSS */
> +       xorl    %eax, %eax
> +       leaq    _bss(%rip), %rdi
> +       leaq    _ebss(%rip), %rcx
> +       subq    %rdi, %rcx
> +       shrq    $3, %rcx
> +       rep     stosq
> +
>         and     $~0xf, %rsp                     /* realign the stack */
>         movq    %rdx, %rbx                      /* save boot_params pointer */
> +       movq    %r10, %rdi
>         call    efi_main
> -       movq    %rbx,%rsi
> -       leaq    rva(startup_64)(%rax), %rax
> -       jmp     *%rax
> +
> +       cld
> +       cli
> +
> +       movq    %rbx, %rdi /* boot_params */
> +       movq    %rax, %rsi /* decompressed kernel address */
> +
> +       /* Make sure we have GDT with 32-bit code segment */
> +       leaq    gdt64(%rip), %rax
> +       addq    %rax, 2(%rax)
> +       lgdt    (%rax)
> +
> +       /* Setup data segments. */
> +       xorl    %eax, %eax
> +       movl    %eax, %ds
> +       movl    %eax, %es
> +       movl    %eax, %ss
> +       movl    %eax, %fs
> +       movl    %eax, %gs
> +
> +       pushq   %rsi
> +       pushq   %rdi
> +
> +       call    load_stage1_idt
> +       call    enable_nx_if_supported
> +
> +       call    trampoline_pgtable_init
> +       movq    %rax, %rdx
> +
> +
> +       /* Swap %rsi and %rsi */
> +       popq    %rsi
> +       popq    %rdi
> +
> +       /* Save the trampoline address in RCX */
> +       movq    trampoline_32bit(%rip), %rcx
> +
> +       /* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far return */
> +       pushq   $__KERNEL32_CS
> +       leaq    TRAMPOLINE_32BIT_CODE_OFFSET(%rcx), %rax
> +       pushq   %rax
> +       lretq
> +
>  SYM_FUNC_END(efi64_stub_entry)
>  SYM_FUNC_ALIAS(efi_stub_entry, efi64_stub_entry)
>  #endif
> diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
> index 043ca31c114e..f50c2a84a754 100644
> --- a/drivers/firmware/efi/Kconfig
> +++ b/drivers/firmware/efi/Kconfig
> @@ -58,6 +58,8 @@ config EFI_DXE_MEM_ATTRIBUTES
>           Use DXE services to check and alter memory protection
>           attributes during boot via EFISTUB to ensure that memory
>           ranges used by the kernel are writable and executable.
> +         This option also enables stricter memory attributes
> +         on compressed kernel PE image.
>
>  config EFI_PARAMS_FROM_FDT
>         bool
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index be8b8c6e8b40..99b81c95344c 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -88,7 +88,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)        += efi-stub.o string.o intrinsics.o systable.o \
>
>  lib-$(CONFIG_ARM)              += arm32-stub.o
>  lib-$(CONFIG_ARM64)            += arm64.o arm64-stub.o arm64-entry.o smbios.o
> -lib-$(CONFIG_X86)              += x86-stub.o
> +lib-$(CONFIG_X86)              += x86-stub.o x86-extract-direct.o
>  lib-$(CONFIG_RISCV)            += riscv.o riscv-stub.o
>  lib-$(CONFIG_LOONGARCH)                += loongarch.o loongarch-stub.o
>
> diff --git a/drivers/firmware/efi/libstub/x86-extract-direct.c b/drivers/firmware/efi/libstub/x86-extract-direct.c
> new file mode 100644
> index 000000000000..4ecbc4a9b3ed
> --- /dev/null
> +++ b/drivers/firmware/efi/libstub/x86-extract-direct.c
> @@ -0,0 +1,208 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <linux/acpi.h>
> +#include <linux/efi.h>
> +#include <linux/elf.h>
> +#include <linux/stddef.h>
> +
> +#include <asm/efi.h>
> +#include <asm/e820/types.h>
> +#include <asm/desc.h>
> +#include <asm/boot.h>
> +#include <asm/bootparam_utils.h>
> +#include <asm/shared/extract.h>
> +#include <asm/shared/pgtable.h>
> +
> +#include "efistub.h"
> +#include "x86-stub.h"
> +
> +static efi_handle_t image_handle;
> +
> +static void do_puthex(unsigned long value)
> +{
> +       efi_printk("%08lx", value);
> +}
> +
> +static void do_putstr(const char *msg)
> +{
> +       efi_printk("%s", msg);
> +}
> +
> +static unsigned long do_map_range(unsigned long start,
> +                                 unsigned long end,
> +                                 unsigned int flags)
> +{
> +       efi_status_t status;
> +
> +       unsigned long size = end - start;
> +
> +       if (flags & MAP_ALLOC) {
> +               unsigned long addr;
> +
> +               status = efi_low_alloc_above(size, CONFIG_PHYSICAL_ALIGN,
> +                                            &addr, start);
> +               if (status != EFI_SUCCESS) {
> +                       efi_err("Unable to allocate memory for uncompressed kernel");
> +                       efi_exit(image_handle, EFI_OUT_OF_RESOURCES);
> +               }
> +
> +               if (start != addr) {
> +                       efi_debug("Unable to allocate at given address"
> +                                 " (desired=0x%lx, actual=0x%lx)",
> +                                 (unsigned long)start, addr);
> +                       start = addr;
> +               }
> +       }
> +
> +       if ((flags & (MAP_PROTECT | MAP_ALLOC)) &&
> +           IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> +               unsigned long attr = 0;
> +
> +               if (!(flags & MAP_EXEC))
> +                       attr |= EFI_MEMORY_XP;
> +
> +               if (!(flags & MAP_WRITE))
> +                       attr |= EFI_MEMORY_RO;
> +
> +               status = efi_adjust_memory_range_protection(start, size, attr);
> +               if (status != EFI_SUCCESS)
> +                       efi_err("Unable to protect memory range");
> +       }
> +
> +       return start;
> +}
> +
> +/*
> + * Trampoline takes 3 pages and can be loaded in first megabyte of memory
> + * with its end placed between 0 and 640k where BIOS might start.
> + * (see arch/x86/boot/compressed/pgtable_64.c)
> + */
> +
> +#ifdef CONFIG_64BIT
> +static efi_status_t prepare_trampoline(void)
> +{
> +       efi_status_t status;
> +
> +       status = efi_allocate_pages(TRAMPOLINE_32BIT_SIZE,
> +                                   (unsigned long *)&trampoline_32bit,
> +                                   TRAMPOLINE_32BIT_PLACEMENT_MAX);
> +
> +       if (status != EFI_SUCCESS)
> +               return status;
> +
> +       unsigned long trampoline_start = (unsigned long)trampoline_32bit;
> +
> +       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
> +
> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> +               /* First page of trampoline is a top level page table */
> +               efi_adjust_memory_range_protection(trampoline_start,
> +                                                  PAGE_SIZE,
> +                                                  EFI_MEMORY_XP);
> +       }
> +
> +       /* Second page of trampoline is the code (with a padding) */
> +
> +       void *caddr = (void *)trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET;
> +
> +       memcpy(caddr, trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
> +
> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> +               efi_adjust_memory_range_protection((unsigned long)caddr,
> +                                                  PAGE_SIZE,
> +                                                  EFI_MEMORY_RO);
> +
> +               /* And the last page of trampoline is the stack */
> +
> +               efi_adjust_memory_range_protection(trampoline_start + 2 * PAGE_SIZE,
> +                                                  PAGE_SIZE,
> +                                                  EFI_MEMORY_XP);
> +       }
> +
> +       return EFI_SUCCESS;
> +}
> +#else
> +static inline efi_status_t prepare_trampoline(void)
> +{
> +       return EFI_SUCCESS;
> +}
> +#endif
> +
> +static efi_status_t init_loader_data(efi_handle_t handle,
> +                                    struct boot_params *params,
> +                                    struct efi_boot_memmap **map)
> +{
> +       struct efi_info *efi = (void *)&params->efi_info;
> +       efi_status_t status;
> +
> +       status = efi_get_memory_map(map, false);
> +
> +       if (status != EFI_SUCCESS) {
> +               efi_err("Unable to get EFI memory map...\n");
> +               return status;
> +       }
> +
> +       const char *signature = efi_is_64bit() ? EFI64_LOADER_SIGNATURE
> +                                              : EFI32_LOADER_SIGNATURE;
> +
> +       memcpy(&efi->efi_loader_signature, signature, sizeof(__u32));
> +
> +       efi->efi_memdesc_size = (*map)->desc_size;
> +       efi->efi_memdesc_version = (*map)->desc_ver;
> +       efi->efi_memmap_size = (*map)->map_size;
> +
> +       efi_set_u64_split((unsigned long)(*map)->map,
> +                         &efi->efi_memmap, &efi->efi_memmap_hi);
> +
> +       efi_set_u64_split((unsigned long)efi_system_table,
> +                         &efi->efi_systab, &efi->efi_systab_hi);
> +
> +       image_handle = handle;
> +
> +       return EFI_SUCCESS;
> +}
> +
> +static void free_loader_data(struct boot_params *params, struct efi_boot_memmap *map)
> +{
> +       struct efi_info *efi = (void *)&params->efi_info;
> +
> +       efi_bs_call(free_pool, map);
> +
> +       efi->efi_memdesc_size = 0;
> +       efi->efi_memdesc_version = 0;
> +       efi->efi_memmap_size = 0;
> +       efi_set_u64_split(0, &efi->efi_memmap, &efi->efi_memmap_hi);
> +}
> +
> +extern unsigned char input_data[];
> +extern unsigned int input_len, output_len;
> +
> +unsigned long extract_kernel_direct(efi_handle_t handle, struct boot_params *params)
> +{
> +
> +       void *res;
> +       efi_status_t status;
> +       struct efi_extract_callbacks cb = { 0 };
> +
> +       status = prepare_trampoline();
> +
> +       if (status != EFI_SUCCESS)
> +               return 0;
> +
> +       /* Prepare environment for do_extract_kernel() call */
> +       struct efi_boot_memmap *map = NULL;
> +       status = init_loader_data(handle, params, &map);
> +
> +       if (status != EFI_SUCCESS)
> +               return 0;
> +
> +       cb.puthex = do_puthex;
> +       cb.putstr = do_putstr;
> +       cb.map_range = do_map_range;
> +
> +       res = efi_extract_kernel(params, &cb, input_data, input_len, output_len);
> +
> +       free_loader_data(params, map);
> +
> +       return (unsigned long)res;
> +}
> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
> index 7fb1eff88a18..1d1ab1911fd3 100644
> --- a/drivers/firmware/efi/libstub/x86-stub.c
> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> @@ -17,6 +17,7 @@
>  #include <asm/boot.h>
>
>  #include "efistub.h"
> +#include "x86-stub.h"
>
>  /* Maximum physical address for 64-bit kernel with 4-level paging */
>  #define MAXMEM_X86_64_4LEVEL (1ull << 46)
> @@ -24,7 +25,7 @@
>  const efi_system_table_t *efi_system_table;
>  const efi_dxe_services_table_t *efi_dxe_table;
>  u32 image_offset __section(".data");
> -static efi_loaded_image_t *image = NULL;
> +static efi_loaded_image_t *image __section(".data");
>
>  static efi_status_t
>  preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
> @@ -212,55 +213,9 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params)
>         }
>  }
>
> -/*
> - * Trampoline takes 2 pages and can be loaded in first megabyte of memory
> - * with its end placed between 128k and 640k where BIOS might start.
> - * (see arch/x86/boot/compressed/pgtable_64.c)
> - *
> - * We cannot find exact trampoline placement since memory map
> - * can be modified by UEFI, and it can alter the computed address.
> - */
> -
> -#define TRAMPOLINE_PLACEMENT_BASE ((128 - 8)*1024)
> -#define TRAMPOLINE_PLACEMENT_SIZE (640*1024 - (128 - 8)*1024)
> -
> -void startup_32(struct boot_params *boot_params);
> -
> -static void
> -setup_memory_protection(unsigned long image_base, unsigned long image_size)
> -{
> -       /*
> -        * Allow execution of possible trampoline used
> -        * for switching between 4- and 5-level page tables
> -        * and relocated kernel image.
> -        */
> -
> -       efi_adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
> -                                          TRAMPOLINE_PLACEMENT_SIZE, 0);
> -
> -#ifdef CONFIG_64BIT
> -       if (image_base != (unsigned long)startup_32)
> -               efi_adjust_memory_range_protection(image_base, image_size, 0);
> -#else
> -       /*
> -        * Clear protection flags on a whole range of possible
> -        * addresses used for KASLR. We don't need to do that
> -        * on x86_64, since KASLR/extraction is performed after
> -        * dedicated identity page tables are built and we only
> -        * need to remove possible protection on relocated image
> -        * itself disregarding further relocations.
> -        */
> -       efi_adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
> -                                          KERNEL_IMAGE_SIZE - LOAD_PHYSICAL_ADDR,
> -                                          0);
> -#endif
> -}
> -
>  static const efi_char16_t apple[] = L"Apple";
>
> -static void setup_quirks(struct boot_params *boot_params,
> -                        unsigned long image_base,
> -                        unsigned long image_size)
> +static void setup_quirks(struct boot_params *boot_params)
>  {
>         efi_char16_t *fw_vendor = (efi_char16_t *)(unsigned long)
>                 efi_table_attr(efi_system_table, fw_vendor);
> @@ -269,9 +224,6 @@ static void setup_quirks(struct boot_params *boot_params,
>                 if (IS_ENABLED(CONFIG_APPLE_PROPERTIES))
>                         retrieve_apple_device_properties(boot_params);
>         }
> -
> -       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES))
> -               setup_memory_protection(image_base, image_size);
>  }
>
>  /*
> @@ -384,7 +336,7 @@ static void setup_graphics(struct boot_params *boot_params)
>  }
>
>
> -static void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
>  {
>         efi_bs_call(exit, handle, status, 0, NULL);
>         for(;;)
> @@ -707,8 +659,7 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle)
>  }
>
>  /*
> - * On success, we return the address of startup_32, which has potentially been
> - * relocated by efi_relocate_kernel.
> + * On success, we return extracted kernel entry point.
>   * On failure, we exit to the firmware via efi_exit instead of returning.
>   */
>  asmlinkage unsigned long efi_main(efi_handle_t handle,
> @@ -733,60 +684,6 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
>                 efi_dxe_table = NULL;
>         }
>
> -       /*
> -        * If the kernel isn't already loaded at a suitable address,
> -        * relocate it.
> -        *
> -        * It must be loaded above LOAD_PHYSICAL_ADDR.
> -        *
> -        * The maximum address for 64-bit is 1 << 46 for 4-level paging. This
> -        * is defined as the macro MAXMEM, but unfortunately that is not a
> -        * compile-time constant if 5-level paging is configured, so we instead
> -        * define our own macro for use here.
> -        *
> -        * For 32-bit, the maximum address is complicated to figure out, for
> -        * now use KERNEL_IMAGE_SIZE, which will be 512MiB, the same as what
> -        * KASLR uses.
> -        *
> -        * Also relocate it if image_offset is zero, i.e. the kernel wasn't
> -        * loaded by LoadImage, but rather by a bootloader that called the
> -        * handover entry. The reason we must always relocate in this case is
> -        * to handle the case of systemd-boot booting a unified kernel image,
> -        * which is a PE executable that contains the bzImage and an initrd as
> -        * COFF sections. The initrd section is placed after the bzImage
> -        * without ensuring that there are at least init_size bytes available
> -        * for the bzImage, and thus the compressed kernel's startup code may
> -        * overwrite the initrd unless it is moved out of the way.
> -        */
> -
> -       buffer_start = ALIGN(bzimage_addr - image_offset,
> -                            hdr->kernel_alignment);
> -       buffer_end = buffer_start + hdr->init_size;
> -
> -       if ((buffer_start < LOAD_PHYSICAL_ADDR)                              ||
> -           (IS_ENABLED(CONFIG_X86_32) && buffer_end > KERNEL_IMAGE_SIZE)    ||
> -           (IS_ENABLED(CONFIG_X86_64) && buffer_end > MAXMEM_X86_64_4LEVEL) ||
> -           (image_offset == 0)) {
> -               extern char _bss[];
> -
> -               status = efi_relocate_kernel(&bzimage_addr,
> -                                            (unsigned long)_bss - bzimage_addr,
> -                                            hdr->init_size,
> -                                            hdr->pref_address,
> -                                            hdr->kernel_alignment,
> -                                            LOAD_PHYSICAL_ADDR);
> -               if (status != EFI_SUCCESS) {
> -                       efi_err("efi_relocate_kernel() failed!\n");
> -                       goto fail;
> -               }
> -               /*
> -                * Now that we've copied the kernel elsewhere, we no longer
> -                * have a set up block before startup_32(), so reset image_offset
> -                * to zero in case it was set earlier.
> -                */
> -               image_offset = 0;
> -       }
> -
>  #ifdef CONFIG_CMDLINE_BOOL
>         status = efi_parse_options(CONFIG_CMDLINE);
>         if (status != EFI_SUCCESS) {
> @@ -843,7 +740,11 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
>
>         setup_efi_pci(boot_params);
>
> -       setup_quirks(boot_params, bzimage_addr, buffer_end - buffer_start);
> +       setup_quirks(boot_params);
> +
> +       bzimage_addr = extract_kernel_direct(handle, boot_params);
> +       if (!bzimage_addr)
> +               goto fail;
>
>         status = exit_boot(boot_params, handle);
>         if (status != EFI_SUCCESS) {
> diff --git a/drivers/firmware/efi/libstub/x86-stub.h b/drivers/firmware/efi/libstub/x86-stub.h
> new file mode 100644
> index 000000000000..baecc7c6e602
> --- /dev/null
> +++ b/drivers/firmware/efi/libstub/x86-stub.h
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _DRIVERS_FIRMWARE_EFI_X86STUB_H
> +#define _DRIVERS_FIRMWARE_EFI_X86STUB_H
> +
> +#include <linux/efi.h>
> +
> +#include <asm/bootparam.h>
> +
> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status);
> +unsigned long extract_kernel_direct(efi_handle_t handle, struct boot_params *boot_params);
> +void startup_32(struct boot_params *boot_params);
> +
> +#endif
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 19/26] x86/build: Cleanup tools/build.c
  2023-03-09 16:25     ` Evgeniy Baskov
@ 2023-03-09 16:50       ` Ard Biesheuvel
  2023-03-09 17:22         ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-09 16:50 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 9 Mar 2023 at 17:25, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-09 18:57, Ard Biesheuvel wrote:
> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> Use newer C standard. Since kernel requires C99 compiler now,
> >> we can make use of the new features to make the core more readable.
> >>
> >> Use mmap() for reading files also to make things simpler.
> >>
> >> Replace most magic numbers with defines.
> >>
> >> Should have no functional changes. This is done in preparation for the
> >> next changes that makes generated PE header more spec compliant.
> >>
> >> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >> ---
> >>  arch/x86/boot/tools/build.c | 387
> >> +++++++++++++++++++++++-------------
> >>  1 file changed, 245 insertions(+), 142 deletions(-)
> >>
> >> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
> >> index bd247692b701..fbc5315af032 100644
> >> --- a/arch/x86/boot/tools/build.c
> >> +++ b/arch/x86/boot/tools/build.c
> >> @@ -25,20 +25,21 @@
> >>   * Substantially overhauled by H. Peter Anvin, April 2007
> >>   */
> >>
> >> +#include <fcntl.h>
> >> +#include <stdarg.h>
> >> +#include <stdint.h>
> >>  #include <stdio.h>
> >> -#include <string.h>
> >>  #include <stdlib.h>
> >> -#include <stdarg.h>
> >> -#include <sys/types.h>
> >> +#include <string.h>
> >> +#include <sys/mman.h>
> >>  #include <sys/stat.h>
> >> +#include <sys/types.h>
> >>  #include <unistd.h>
> >> -#include <fcntl.h>
> >> -#include <sys/mman.h>
> >> +
> >>  #include <tools/le_byteshift.h>
> >> +#include <linux/pe.h>
> >>
> >> -typedef unsigned char  u8;
> >> -typedef unsigned short u16;
> >> -typedef unsigned int   u32;
> >> +#define round_up(x, n) (((x) + (n) - 1) & ~((n) - 1))
> >>
> >>  #define DEFAULT_MAJOR_ROOT 0
> >>  #define DEFAULT_MINOR_ROOT 0
> >> @@ -48,8 +49,13 @@ typedef unsigned int   u32;
> >>  #define SETUP_SECT_MIN 5
> >>  #define SETUP_SECT_MAX 64
> >>
> >> +#define PARAGRAPH_SIZE 16
> >> +#define SECTOR_SIZE 512
> >> +#define FILE_ALIGNMENT 512
> >> +#define SECTION_ALIGNMENT 4096
> >> +
> >>  /* This must be large enough to hold the entire setup */
> >> -u8 buf[SETUP_SECT_MAX*512];
> >> +uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
> >>
> >>  #define PECOFF_RELOC_RESERVE 0x20
> >>
> >> @@ -59,6 +65,52 @@ u8 buf[SETUP_SECT_MAX*512];
> >>  #define PECOFF_COMPAT_RESERVE 0x0
> >>  #endif
> >>
> >> +#define RELOC_SECTION_SIZE 10
> >> +
> >> +/* PE header has different format depending on the architecture */
> >> +#ifdef CONFIG_X86_64
> >> +typedef struct pe32plus_opt_hdr pe_opt_hdr;
> >> +#else
> >> +typedef struct pe32_opt_hdr pe_opt_hdr;
> >> +#endif
> >> +
> >> +static inline struct pe_hdr *get_pe_header(uint8_t *buf)
> >> +{
> >> +       uint32_t pe_offset =
> >> get_unaligned_le32(buf+MZ_HEADER_PEADDR_OFFSET);
> >> +       return (struct pe_hdr *)(buf + pe_offset);
> >> +}
> >> +
> >> +static inline pe_opt_hdr *get_pe_opt_header(uint8_t *buf)
> >> +{
> >> +       return (pe_opt_hdr *)(get_pe_header(buf) + 1);
> >> +}
> >> +
> >> +static inline struct section_header *get_sections(uint8_t *buf)
> >> +{
> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> >> +       uint32_t n_data_dirs = get_unaligned_le32(&hdr->data_dirs);
> >> +       uint8_t *sections = (uint8_t *)(hdr + 1) +
> >> n_data_dirs*sizeof(struct data_dirent);
> >> +       return  (struct section_header *)sections;
> >> +}
> >> +
> >> +static inline struct data_directory *get_data_dirs(uint8_t *buf)
> >> +{
> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> >> +       return (struct data_directory *)(hdr + 1);
> >> +}
> >> +
> >> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
> >
> > Can we drop this conditional?
>
> Without CONFIG_EFI_DXE_MEM_ATTRIBUTES memory attributes are not
> getting applies anywhere, so this would break 'nokaslr' on UEFI
> implementations that honor section attributes.
>

How so? This only affects the mappings that are created by UEFI for
the decompressor binary, right?

> KASLR is already broken without that option on implementations
> that disallow execution of the free memory though. But unlike
> free memory, sections are more likely to get protected, I think.
>

We need to allocate those pages properly in any case (see my other
reply) so it is no longer free memory.

> >> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE |
> >> IMAGE_SCN_ALIGN_4096BYTES)
> >> +#define SCN_RX (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE |
> >> IMAGE_SCN_ALIGN_4096BYTES)
> >> +#define SCN_RO (IMAGE_SCN_MEM_READ | IMAGE_SCN_ALIGN_4096BYTES)
> >
> > Please drop the alignment flags - they don't apply to executable only
> > object files.
>
> Got it, will remove them in v5.
>
> >
> >> +#else
> >> +/* With memory protection disabled all sections are RWX */
> >> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | \
> >> +               IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
> >> +#define SCN_RX SCN_RW
> >> +#define SCN_RO SCN_RW
> >> +#endif
> >> +
> >>  static unsigned long efi32_stub_entry;
> >>  static unsigned long efi64_stub_entry;
> >>  static unsigned long efi_pe_entry;
> >> @@ -70,7 +122,7 @@ static unsigned long _end;
> >>
> >>
> >> /*----------------------------------------------------------------------*/
> >>
> >> -static const u32 crctab32[] = {
> >> +static const uint32_t crctab32[] = {
> >
> > Replacing all the type names makes this patch very messy. Can we back
> > that out please?
>
> Ok, I will revert them.
>
> >
> >>         0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
> >>         0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
> >>         0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
> >> @@ -125,12 +177,12 @@ static const u32 crctab32[] = {
> >>         0x2d02ef8d
> >>  };
> >>
> >> -static u32 partial_crc32_one(u8 c, u32 crc)
> >> +static uint32_t partial_crc32_one(uint8_t c, uint32_t crc)
> >>  {
> >>         return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
> >>  }
> >>
> >> -static u32 partial_crc32(const u8 *s, int len, u32 crc)
> >> +static uint32_t partial_crc32(const uint8_t *s, int len, uint32_t
> >> crc)
> >>  {
> >>         while (len--)
> >>                 crc = partial_crc32_one(*s++, crc);
> >> @@ -152,57 +204,106 @@ static void usage(void)
> >>         die("Usage: build setup system zoffset.h image");
> >>  }
> >>
> >> +static void *map_file(const char *path, size_t *psize)
> >> +{
> >> +       struct stat statbuf;
> >> +       size_t size;
> >> +       void *addr;
> >> +       int fd;
> >> +
> >> +       fd = open(path, O_RDONLY);
> >> +       if (fd < 0)
> >> +               die("Unable to open `%s': %m", path);
> >> +       if (fstat(fd, &statbuf))
> >> +               die("Unable to stat `%s': %m", path);
> >> +
> >> +       size = statbuf.st_size;
> >> +       /*
> >> +        * Map one byte more, to allow adding null-terminator
> >> +        * for text files.
> >> +        */
> >> +       addr = mmap(NULL, size + 1, PROT_READ | PROT_WRITE,
> >> MAP_PRIVATE, fd, 0);
> >> +       if (addr == MAP_FAILED)
> >> +               die("Unable to mmap '%s': %m", path);
> >> +
> >> +       close(fd);
> >> +
> >> +       *psize = size;
> >> +       return addr;
> >> +}
> >> +
> >> +static void unmap_file(void *addr, size_t size)
> >> +{
> >> +       munmap(addr, size + 1);
> >> +}
> >> +
> >> +static void *map_output_file(const char *path, size_t size)
> >> +{
> >> +       void *addr;
> >> +       int fd;
> >> +
> >> +       fd = open(path, O_RDWR | O_CREAT, 0660);
> >> +       if (fd < 0)
> >> +               die("Unable to create `%s': %m", path);
> >> +
> >> +       if (ftruncate(fd, size))
> >> +               die("Unable to resize `%s': %m", path);
> >> +
> >> +       addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
> >> fd, 0);
> >> +       if (addr == MAP_FAILED)
> >> +               die("Unable to mmap '%s': %m", path);
> >> +
> >> +       return addr;
> >> +}
> >> +
> >>  #ifdef CONFIG_EFI_STUB
> >>
> >> -static void update_pecoff_section_header_fields(char *section_name,
> >> u32 vma, u32 size, u32 datasz, u32 offset)
> >> +static void update_pecoff_section_header_fields(char *section_name,
> >> uint32_t vma,
> >> +                                               uint32_t size,
> >> uint32_t datasz,
> >> +                                               uint32_t offset)
> >>  {
> >>         unsigned int pe_header;
> >>         unsigned short num_sections;
> >> -       u8 *section;
> >> +       struct section_header *section;
> >>
> >> -       pe_header = get_unaligned_le32(&buf[0x3c]);
> >> -       num_sections = get_unaligned_le16(&buf[pe_header + 6]);
> >> -
> >> -#ifdef CONFIG_X86_32
> >> -       section = &buf[pe_header + 0xa8];
> >> -#else
> >> -       section = &buf[pe_header + 0xb8];
> >> -#endif
> >> +       struct pe_hdr *hdr = get_pe_header(buf);
> >> +       num_sections = get_unaligned_le16(&hdr->sections);
> >> +       section = get_sections(buf);
> >>
> >>         while (num_sections > 0) {
> >> -               if (strncmp((char*)section, section_name, 8) == 0) {
> >> +               if (strncmp(section->name, section_name, 8) == 0) {
> >>                         /* section header size field */
> >> -                       put_unaligned_le32(size, section + 0x8);
> >> +                       put_unaligned_le32(size,
> >> &section->virtual_size);
> >>
> >>                         /* section header vma field */
> >> -                       put_unaligned_le32(vma, section + 0xc);
> >> +                       put_unaligned_le32(vma,
> >> &section->virtual_address);
> >>
> >>                         /* section header 'size of initialised data'
> >> field */
> >> -                       put_unaligned_le32(datasz, section + 0x10);
> >> +                       put_unaligned_le32(datasz,
> >> &section->raw_data_size);
> >>
> >>                         /* section header 'file offset' field */
> >> -                       put_unaligned_le32(offset, section + 0x14);
> >> +                       put_unaligned_le32(offset,
> >> &section->data_addr);
> >>
> >>                         break;
> >>                 }
> >> -               section += 0x28;
> >> +               section++;
> >>                 num_sections--;
> >>         }
> >>  }
> >>
> >> -static void update_pecoff_section_header(char *section_name, u32
> >> offset, u32 size)
> >> +static void update_pecoff_section_header(char *section_name, uint32_t
> >> offset, uint32_t size)
> >>  {
> >>         update_pecoff_section_header_fields(section_name, offset,
> >> size, size, offset);
> >>  }
> >>
> >>  static void update_pecoff_setup_and_reloc(unsigned int size)
> >>  {
> >> -       u32 setup_offset = 0x200;
> >> -       u32 reloc_offset = size - PECOFF_RELOC_RESERVE -
> >> PECOFF_COMPAT_RESERVE;
> >> +       uint32_t setup_offset = SECTOR_SIZE;
> >> +       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE -
> >> PECOFF_COMPAT_RESERVE;
> >>  #ifdef CONFIG_EFI_MIXED
> >> -       u32 compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
> >> +       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
> >>  #endif
> >> -       u32 setup_size = reloc_offset - setup_offset;
> >> +       uint32_t setup_size = reloc_offset - setup_offset;
> >>
> >>         update_pecoff_section_header(".setup", setup_offset,
> >> setup_size);
> >>         update_pecoff_section_header(".reloc", reloc_offset,
> >> PECOFF_RELOC_RESERVE);
> >> @@ -211,8 +312,8 @@ static void update_pecoff_setup_and_reloc(unsigned
> >> int size)
> >>          * Modify .reloc section contents with a single entry. The
> >>          * relocation is applied to offset 10 of the relocation
> >> section.
> >>          */
> >> -       put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
> >> -       put_unaligned_le32(10, &buf[reloc_offset + 4]);
> >> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE,
> >> &buf[reloc_offset]);
> >> +       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset +
> >> 4]);
> >>
> >>  #ifdef CONFIG_EFI_MIXED
> >>         update_pecoff_section_header(".compat", compat_offset,
> >> PECOFF_COMPAT_RESERVE);
> >> @@ -224,19 +325,17 @@ static void
> >> update_pecoff_setup_and_reloc(unsigned int size)
> >>          */
> >>         buf[compat_offset] = 0x1;
> >>         buf[compat_offset + 1] = 0x8;
> >> -       put_unaligned_le16(0x14c, &buf[compat_offset + 2]);
> >> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset
> >> + 2]);
> >>         put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset +
> >> 4]);
> >>  #endif
> >>  }
> >>
> >> -static void update_pecoff_text(unsigned int text_start, unsigned int
> >> file_sz,
> >> +static unsigned int update_pecoff_sections(unsigned int text_start,
> >> unsigned int text_sz,
> >>                                unsigned int init_sz)
> >>  {
> >> -       unsigned int pe_header;
> >> -       unsigned int text_sz = file_sz - text_start;
> >> +       unsigned int file_sz = text_start + text_sz;
> >>         unsigned int bss_sz = init_sz - file_sz;
> >> -
> >> -       pe_header = get_unaligned_le32(&buf[0x3c]);
> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> >>
> >>         /*
> >>          * The PE/COFF loader may load the image at an address which
> >> is
> >> @@ -254,18 +353,20 @@ static void update_pecoff_text(unsigned int
> >> text_start, unsigned int file_sz,
> >>          * Size of code: Subtract the size of the first sector (512
> >> bytes)
> >>          * which includes the header.
> >>          */
> >> -       put_unaligned_le32(file_sz - 512 + bss_sz, &buf[pe_header +
> >> 0x1c]);
> >> +       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz,
> >> &hdr->text_size);
> >>
> >>         /* Size of image */
> >> -       put_unaligned_le32(init_sz, &buf[pe_header + 0x50]);
> >> +       put_unaligned_le32(init_sz, &hdr->image_size);
> >>
> >>         /*
> >>          * Address of entry point for PE/COFF executable
> >>          */
> >> -       put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header +
> >> 0x28]);
> >> +       put_unaligned_le32(text_start + efi_pe_entry,
> >> &hdr->entry_point);
> >>
> >>         update_pecoff_section_header_fields(".text", text_start,
> >> text_sz + bss_sz,
> >>                                             text_sz, text_start);
> >> +
> >> +       return text_start + file_sz;
> >>  }
> >>
> >>  static int reserve_pecoff_reloc_section(int c)
> >> @@ -275,7 +376,7 @@ static int reserve_pecoff_reloc_section(int c)
> >>         return PECOFF_RELOC_RESERVE;
> >>  }
> >>
> >> -static void efi_stub_defaults(void)
> >> +static void efi_stub_update_defaults(void)
> >>  {
> >>         /* Defaults for old kernel */
> >>  #ifdef CONFIG_X86_32
> >> @@ -298,7 +399,7 @@ static void efi_stub_entry_update(void)
> >>
> >>  #ifdef CONFIG_EFI_MIXED
> >>         if (efi32_stub_entry != addr)
> >> -               die("32-bit and 64-bit EFI entry points do not
> >> match\n");
> >> +               die("32-bit and 64-bit EFI entry points do not
> >> match");
> >>  #endif
> >>  #endif
> >>         put_unaligned_le32(addr, &buf[0x264]);
> >> @@ -310,7 +411,7 @@ static inline void
> >> update_pecoff_setup_and_reloc(unsigned int size) {}
> >>  static inline void update_pecoff_text(unsigned int text_start,
> >>                                       unsigned int file_sz,
> >>                                       unsigned int init_sz) {}
> >> -static inline void efi_stub_defaults(void) {}
> >> +static inline void efi_stub_update_defaults(void) {}
> >>  static inline void efi_stub_entry_update(void) {}
> >>
> >>  static inline int reserve_pecoff_reloc_section(int c)
> >> @@ -338,20 +439,15 @@ static int reserve_pecoff_compat_section(int c)
> >>
> >>  static void parse_zoffset(char *fname)
> >>  {
> >> -       FILE *file;
> >> -       char *p;
> >> -       int c;
> >> +       size_t size;
> >> +       char *data, *p;
> >>
> >> -       file = fopen(fname, "r");
> >> -       if (!file)
> >> -               die("Unable to open `%s': %m", fname);
> >> -       c = fread(buf, 1, sizeof(buf) - 1, file);
> >> -       if (ferror(file))
> >> -               die("read-error on `zoffset.h'");
> >> -       fclose(file);
> >> -       buf[c] = 0;
> >> +       data = map_file(fname, &size);
> >>
> >> -       p = (char *)buf;
> >> +       /* We can do that, since we mapped one byte more */
> >> +       data[size] = 0;
> >> +
> >> +       p = (char *)data;
> >>
> >>         while (p && *p) {
> >>                 PARSE_ZOFS(p, efi32_stub_entry);
> >> @@ -367,82 +463,99 @@ static void parse_zoffset(char *fname)
> >>                 while (p && (*p == '\r' || *p == '\n'))
> >>                         p++;
> >>         }
> >> +
> >> +       unmap_file(data, size);
> >>  }
> >>
> >> -int main(int argc, char ** argv)
> >> +static unsigned int read_setup(char *path)
> >>  {
> >> -       unsigned int i, sz, setup_sectors, init_sz;
> >> -       int c;
> >> -       u32 sys_size;
> >> -       struct stat sb;
> >> -       FILE *file, *dest;
> >> -       int fd;
> >> -       void *kernel;
> >> -       u32 crc = 0xffffffffUL;
> >> -
> >> -       efi_stub_defaults();
> >> -
> >> -       if (argc != 5)
> >> -               usage();
> >> -       parse_zoffset(argv[3]);
> >> -
> >> -       dest = fopen(argv[4], "w");
> >> -       if (!dest)
> >> -               die("Unable to write `%s': %m", argv[4]);
> >> +       FILE *file;
> >> +       unsigned int setup_size, file_size;
> >>
> >>         /* Copy the setup code */
> >> -       file = fopen(argv[1], "r");
> >> +       file = fopen(path, "r");
> >>         if (!file)
> >> -               die("Unable to open `%s': %m", argv[1]);
> >> -       c = fread(buf, 1, sizeof(buf), file);
> >> +               die("Unable to open `%s': %m", path);
> >> +
> >> +       file_size = fread(buf, 1, sizeof(buf), file);
> >>         if (ferror(file))
> >>                 die("read-error on `setup'");
> >> -       if (c < 1024)
> >> +
> >> +       if (file_size < 2 * SECTOR_SIZE)
> >>                 die("The setup must be at least 1024 bytes");
> >> -       if (get_unaligned_le16(&buf[510]) != 0xAA55)
> >> +
> >> +       if (get_unaligned_le16(&buf[SECTOR_SIZE - 2]) != 0xAA55)
> >>                 die("Boot block hasn't got boot flag (0xAA55)");
> >> +
> >>         fclose(file);
> >>
> >> -       c += reserve_pecoff_compat_section(c);
> >> -       c += reserve_pecoff_reloc_section(c);
> >> +       /* Reserve space for PE sections */
> >> +       file_size += reserve_pecoff_compat_section(file_size);
> >> +       file_size += reserve_pecoff_reloc_section(file_size);
> >>
> >>         /* Pad unused space with zeros */
> >> -       setup_sectors = (c + 511) / 512;
> >> -       if (setup_sectors < SETUP_SECT_MIN)
> >> -               setup_sectors = SETUP_SECT_MIN;
> >> -       i = setup_sectors*512;
> >> -       memset(buf+c, 0, i-c);
> >>
> >> -       update_pecoff_setup_and_reloc(i);
> >> +       setup_size = round_up(file_size, SECTOR_SIZE);
> >> +
> >> +       if (setup_size < SETUP_SECT_MIN * SECTOR_SIZE)
> >> +               setup_size = SETUP_SECT_MIN * SECTOR_SIZE;
> >> +
> >> +       /*
> >> +        * Global buffer is already initialised
> >> +        * to 0, but just in case, zero out padding.
> >> +        */
> >> +
> >> +       memset(buf + file_size, 0, setup_size - file_size);
> >> +
> >> +       return setup_size;
> >> +}
> >> +
> >> +int main(int argc, char **argv)
> >> +{
> >> +       size_t kern_file_size;
> >> +       unsigned int setup_size;
> >> +       unsigned int setup_sectors;
> >> +       unsigned int init_size;
> >> +       unsigned int total_size;
> >> +       unsigned int kern_size;
> >> +       void *kernel;
> >> +       uint32_t crc = 0xffffffffUL;
> >> +       uint8_t *output;
> >> +
> >> +       if (argc != 5)
> >> +               usage();
> >> +
> >> +       efi_stub_update_defaults();
> >> +       parse_zoffset(argv[3]);
> >> +
> >> +       setup_size = read_setup(argv[1]);
> >> +
> >> +       setup_sectors = setup_size/SECTOR_SIZE;
> >>
> >>         /* Set the default root device */
> >>         put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
> >>
> >> -       /* Open and stat the kernel file */
> >> -       fd = open(argv[2], O_RDONLY);
> >> -       if (fd < 0)
> >> -               die("Unable to open `%s': %m", argv[2]);
> >> -       if (fstat(fd, &sb))
> >> -               die("Unable to stat `%s': %m", argv[2]);
> >> -       sz = sb.st_size;
> >> -       kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
> >> -       if (kernel == MAP_FAILED)
> >> -               die("Unable to mmap '%s': %m", argv[2]);
> >> -       /* Number of 16-byte paragraphs, including space for a 4-byte
> >> CRC */
> >> -       sys_size = (sz + 15 + 4) / 16;
> >> +       /* Map kernel file to memory */
> >> +       kernel = map_file(argv[2], &kern_file_size);
> >> +
> >>  #ifdef CONFIG_EFI_STUB
> >> -       /*
> >> -        * COFF requires minimum 32-byte alignment of sections, and
> >> -        * adding a signature is problematic without that alignment.
> >> -        */
> >> -       sys_size = (sys_size + 1) & ~1;
> >> +       /* PE specification require 512-byte minimum section file
> >> alignment */
> >> +       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
> >> +       update_pecoff_setup_and_reloc(setup_size);
> >> +#else
> >> +       /* Number of 16-byte paragraphs, including space for a 4-byte
> >> CRC */
> >> +       kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
> >>  #endif
> >>
> >>         /* Patch the setup code with the appropriate size parameters
> >> */
> >> -       buf[0x1f1] = setup_sectors-1;
> >> -       put_unaligned_le32(sys_size, &buf[0x1f4]);
> >> +       buf[0x1f1] = setup_sectors - 1;
> >> +       put_unaligned_le32(kern_size/PARAGRAPH_SIZE, &buf[0x1f4]);
> >> +
> >> +       /* Update kernel_info offset. */
> >> +       put_unaligned_le32(kernel_info, &buf[0x268]);
> >> +
> >> +       init_size = get_unaligned_le32(&buf[0x260]);
> >>
> >> -       init_sz = get_unaligned_le32(&buf[0x260]);
> >>  #ifdef CONFIG_EFI_STUB
> >>         /*
> >>          * The decompression buffer will start at ImageBase. When
> >> relocating
> >> @@ -458,45 +571,35 @@ int main(int argc, char ** argv)
> >>          * For future-proofing, increase init_sz if necessary.
> >>          */
> >>
> >> -       if (init_sz - _end < i + _ehead) {
> >> -               init_sz = (i + _ehead + _end + 4095) & ~4095;
> >> -               put_unaligned_le32(init_sz, &buf[0x260]);
> >> +       if (init_size - _end < setup_size + _ehead) {
> >> +               init_size = round_up(setup_size + _ehead + _end,
> >> SECTION_ALIGNMENT);
> >> +               put_unaligned_le32(init_size, &buf[0x260]);
> >>         }
> >> -#endif
> >> -       update_pecoff_text(setup_sectors * 512, i + (sys_size * 16),
> >> init_sz);
> >>
> >> -       efi_stub_entry_update();
> >> -
> >> -       /* Update kernel_info offset. */
> >> -       put_unaligned_le32(kernel_info, &buf[0x268]);
> >> +       total_size = update_pecoff_sections(setup_size, kern_size,
> >> init_size);
> >>
> >> -       crc = partial_crc32(buf, i, crc);
> >> -       if (fwrite(buf, 1, i, dest) != i)
> >> -               die("Writing setup failed");
> >> +       efi_stub_entry_update();
> >> +#else
> >> +       (void)init_size;
> >> +       total_size = setup_size + kern_size;
> >> +#endif
> >>
> >> -       /* Copy the kernel code */
> >> -       crc = partial_crc32(kernel, sz, crc);
> >> -       if (fwrite(kernel, 1, sz, dest) != sz)
> >> -               die("Writing kernel failed");
> >> +       output = map_output_file(argv[4], total_size);
> >>
> >> -       /* Add padding leaving 4 bytes for the checksum */
> >> -       while (sz++ < (sys_size*16) - 4) {
> >> -               crc = partial_crc32_one('\0', crc);
> >> -               if (fwrite("\0", 1, 1, dest) != 1)
> >> -                       die("Writing padding failed");
> >> -       }
> >> +       memcpy(output, buf, setup_size);
> >> +       memcpy(output + setup_size, kernel, kern_file_size);
> >> +       memset(output + setup_size + kern_file_size, 0, kern_size -
> >> kern_file_size);
> >>
> >> -       /* Write the CRC */
> >> -       put_unaligned_le32(crc, buf);
> >> -       if (fwrite(buf, 1, 4, dest) != 4)
> >> -               die("Writing CRC failed");
> >> +       /* Calculate and write kernel checksum. */
> >> +       crc = partial_crc32(output, total_size - 4, crc);
> >> +       put_unaligned_le32(crc, &output[total_size - 4]);
> >>
> >> -       /* Catch any delayed write failures */
> >> -       if (fclose(dest))
> >> -               die("Writing image failed");
> >> +       /* Catch any delayed write failures. */
> >> +       if (munmap(output, total_size) < 0)
> >> +               die("Writing kernel failed");
> >>
> >> -       close(fd);
> >> +       unmap_file(kernel, kern_file_size);
> >>
> >> -       /* Everything is OK */
> >> +       /* Everything is OK. */
> >>         return 0;
> >>  }
> >> --
> >> 2.37.4
> >>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2023-03-09 16:00   ` Ard Biesheuvel
@ 2023-03-09 17:05     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-09 17:05 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-09 19:00, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Doing it that way allows setting up stricter memory attributes,
>> simplifies boot code path and removes potential relocation
>> of kernel image.
>> 
>> Wire up required interfaces and minimally initialize zero page
>> fields needed for it to function correctly.
>> 
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> ---
>>  arch/x86/boot/compressed/head_32.S            |  50 ++++-
>>  arch/x86/boot/compressed/head_64.S            |  58 ++++-
>>  drivers/firmware/efi/Kconfig                  |   2 +
>>  drivers/firmware/efi/libstub/Makefile         |   2 +-
>>  .../firmware/efi/libstub/x86-extract-direct.c | 208 
>> ++++++++++++++++++
>>  drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
>>  drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
>>  7 files changed, 338 insertions(+), 115 deletions(-)
>>  create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
>>  create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
>> 
>> diff --git a/arch/x86/boot/compressed/head_32.S 
>> b/arch/x86/boot/compressed/head_32.S
>> index ead6007df1e5..0be75e5072ae 100644
>> --- a/arch/x86/boot/compressed/head_32.S
>> +++ b/arch/x86/boot/compressed/head_32.S
>> @@ -152,11 +152,57 @@ SYM_FUNC_END(startup_32)
>> 
>>  #ifdef CONFIG_EFI_STUB
>>  SYM_FUNC_START(efi32_stub_entry)
>> +/*
>> + * Calculate the delta between where we were compiled to run
>> + * at and where we were actually loaded at.  This can only be done
>> + * with a short local call on x86.  Nothing  else will tell us what
>> + * address we are running at.  The reserved chunk of the real-mode
>> + * data at 0x1e4 (defined as a scratch field) are used as the stack
>> + * for this calculation. Only 4 bytes are needed.
>> + */
> 
> Please drop this comment

Will do.

> 
>> +       call    1f
>> +1:     popl    %ebx
>> +       addl    $_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx
> 
> Please drop this and ...
> 
>> +
>> +       /* Clear BSS */
>> +       xorl    %eax, %eax
>> +       leal    _bss@GOTOFF(%ebx), %edi
>> +       leal    _ebss@GOTOFF(%ebx), %ecx
> 
> just use (_bss - 1b) here (etc)

I was trying to be consistent with the code below, but it will
indeed be better to do this like that. I guess, this will be
fine to stop putting GOT address to the %ebx, since the extraction
code does not use calls via PLT?

> 
>> +       subl    %edi, %ecx
>> +       shrl    $2, %ecx
>> +       rep     stosl
>> +
>>         add     $0x4, %esp
>>         movl    8(%esp), %esi   /* save boot_params pointer */
>> +       movl    %edx, %edi      /* save GOT address */
> 
> What does this do?

Hmm... It seems to be a remnant of the previous implementation
that I forgot to remove. I will remove that in the v5.

> 
>>         call    efi_main
>> -       /* efi_main returns the possibly relocated address of 
>> startup_32 */
>> -       jmp     *%eax
>> +       movl    %eax, %ecx
>> +
>> +       /*
>> +        * efi_main returns the possibly
>> +        * relocated address of extracted kernel entry point.
>> +        */
>> +
>> +       cli
>> +
>> +       /* Load new GDT */
>> +       leal    gdt@GOTOFF(%ebx), %eax
>> +       movl    %eax, 2(%eax)
>> +       lgdt    (%eax)
>> +
>> +       /* Load segment registers with our descriptors */
>> +       movl    $__BOOT_DS, %eax
>> +       movl    %eax, %ds
>> +       movl    %eax, %es
>> +       movl    %eax, %fs
>> +       movl    %eax, %gs
>> +       movl    %eax, %ss
>> +
>> +       /* Zero EFLAGS */
>> +       pushl   $0
>> +       popfl
>> +
>> +       jmp     *%ecx
>>  SYM_FUNC_END(efi32_stub_entry)
>>  SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
>>  #endif
> ...

Thanks,
Evgeniy Baskov

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2023-03-09 16:49   ` Ard Biesheuvel
@ 2023-03-09 17:10     ` Evgeniy Baskov
  2023-03-09 17:11       ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-09 17:10 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-09 19:49, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Doing it that way allows setting up stricter memory attributes,
>> simplifies boot code path and removes potential relocation
>> of kernel image.
>> 
>> Wire up required interfaces and minimally initialize zero page
>> fields needed for it to function correctly.
>> 
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> OK I just realized that there is a problem with this approach: since
> we now decompress the image while running in the EFI stub (i.e.,
> before ExitBootServices()), we cannot just randomly pick a
> EFI_CONVENTIONAL_MEMORY region to place the kernel, we need to
> allocate the pages using the boot services. Otherwise, subsequent
> allocations (or concurrent ones occurring in the firmware in event
> handlers etc) may land right in the middle, which is unlikely to be
> what we want.

It does allocate pages for the kernel.
I've marked the place below.

> 
> 
>> ---
>>  arch/x86/boot/compressed/head_32.S            |  50 ++++-
>>  arch/x86/boot/compressed/head_64.S            |  58 ++++-
>>  drivers/firmware/efi/Kconfig                  |   2 +
>>  drivers/firmware/efi/libstub/Makefile         |   2 +-
>>  .../firmware/efi/libstub/x86-extract-direct.c | 208 
>> ++++++++++++++++++
>>  drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
>>  drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
>>  7 files changed, 338 insertions(+), 115 deletions(-)
>>  create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
>>  create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
>> 
>> diff --git a/arch/x86/boot/compressed/head_32.S 
>> b/arch/x86/boot/compressed/head_32.S
>> index ead6007df1e5..0be75e5072ae 100644
>> --- a/arch/x86/boot/compressed/head_32.S
>> +++ b/arch/x86/boot/compressed/head_32.S
>> @@ -152,11 +152,57 @@ SYM_FUNC_END(startup_32)
>> 
>>  #ifdef CONFIG_EFI_STUB
>>  SYM_FUNC_START(efi32_stub_entry)
>> +/*
>> + * Calculate the delta between where we were compiled to run
>> + * at and where we were actually loaded at.  This can only be done
>> + * with a short local call on x86.  Nothing  else will tell us what
>> + * address we are running at.  The reserved chunk of the real-mode
>> + * data at 0x1e4 (defined as a scratch field) are used as the stack
>> + * for this calculation. Only 4 bytes are needed.
>> + */
>> +       call    1f
>> +1:     popl    %ebx
>> +       addl    $_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx
>> +
>> +       /* Clear BSS */
>> +       xorl    %eax, %eax
>> +       leal    _bss@GOTOFF(%ebx), %edi
>> +       leal    _ebss@GOTOFF(%ebx), %ecx
>> +       subl    %edi, %ecx
>> +       shrl    $2, %ecx
>> +       rep     stosl
>> +
>>         add     $0x4, %esp
>>         movl    8(%esp), %esi   /* save boot_params pointer */
>> +       movl    %edx, %edi      /* save GOT address */
>>         call    efi_main
>> -       /* efi_main returns the possibly relocated address of 
>> startup_32 */
>> -       jmp     *%eax
>> +       movl    %eax, %ecx
>> +
>> +       /*
>> +        * efi_main returns the possibly
>> +        * relocated address of extracted kernel entry point.
>> +        */
>> +
>> +       cli
>> +
>> +       /* Load new GDT */
>> +       leal    gdt@GOTOFF(%ebx), %eax
>> +       movl    %eax, 2(%eax)
>> +       lgdt    (%eax)
>> +
>> +       /* Load segment registers with our descriptors */
>> +       movl    $__BOOT_DS, %eax
>> +       movl    %eax, %ds
>> +       movl    %eax, %es
>> +       movl    %eax, %fs
>> +       movl    %eax, %gs
>> +       movl    %eax, %ss
>> +
>> +       /* Zero EFLAGS */
>> +       pushl   $0
>> +       popfl
>> +
>> +       jmp     *%ecx
>>  SYM_FUNC_END(efi32_stub_entry)
>>  SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
>>  #endif
>> diff --git a/arch/x86/boot/compressed/head_64.S 
>> b/arch/x86/boot/compressed/head_64.S
>> index 2dd8be0583d2..7cfef7bd0424 100644
>> --- a/arch/x86/boot/compressed/head_64.S
>> +++ b/arch/x86/boot/compressed/head_64.S
>> @@ -529,12 +529,64 @@ SYM_CODE_END(startup_64)
>>         .org 0x390
>>  #endif
>>  SYM_FUNC_START(efi64_stub_entry)
>> +       /* Preserve first parameter */
>> +       movq    %rdi, %r10
>> +
>> +       /* Clear BSS */
>> +       xorl    %eax, %eax
>> +       leaq    _bss(%rip), %rdi
>> +       leaq    _ebss(%rip), %rcx
>> +       subq    %rdi, %rcx
>> +       shrq    $3, %rcx
>> +       rep     stosq
>> +
>>         and     $~0xf, %rsp                     /* realign the stack 
>> */
>>         movq    %rdx, %rbx                      /* save boot_params 
>> pointer */
>> +       movq    %r10, %rdi
>>         call    efi_main
>> -       movq    %rbx,%rsi
>> -       leaq    rva(startup_64)(%rax), %rax
>> -       jmp     *%rax
>> +
>> +       cld
>> +       cli
>> +
>> +       movq    %rbx, %rdi /* boot_params */
>> +       movq    %rax, %rsi /* decompressed kernel address */
>> +
>> +       /* Make sure we have GDT with 32-bit code segment */
>> +       leaq    gdt64(%rip), %rax
>> +       addq    %rax, 2(%rax)
>> +       lgdt    (%rax)
>> +
>> +       /* Setup data segments. */
>> +       xorl    %eax, %eax
>> +       movl    %eax, %ds
>> +       movl    %eax, %es
>> +       movl    %eax, %ss
>> +       movl    %eax, %fs
>> +       movl    %eax, %gs
>> +
>> +       pushq   %rsi
>> +       pushq   %rdi
>> +
>> +       call    load_stage1_idt
>> +       call    enable_nx_if_supported
>> +
>> +       call    trampoline_pgtable_init
>> +       movq    %rax, %rdx
>> +
>> +
>> +       /* Swap %rsi and %rsi */
>> +       popq    %rsi
>> +       popq    %rdi
>> +
>> +       /* Save the trampoline address in RCX */
>> +       movq    trampoline_32bit(%rip), %rcx
>> +
>> +       /* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far 
>> return */
>> +       pushq   $__KERNEL32_CS
>> +       leaq    TRAMPOLINE_32BIT_CODE_OFFSET(%rcx), %rax
>> +       pushq   %rax
>> +       lretq
>> +
>>  SYM_FUNC_END(efi64_stub_entry)
>>  SYM_FUNC_ALIAS(efi_stub_entry, efi64_stub_entry)
>>  #endif
>> diff --git a/drivers/firmware/efi/Kconfig 
>> b/drivers/firmware/efi/Kconfig
>> index 043ca31c114e..f50c2a84a754 100644
>> --- a/drivers/firmware/efi/Kconfig
>> +++ b/drivers/firmware/efi/Kconfig
>> @@ -58,6 +58,8 @@ config EFI_DXE_MEM_ATTRIBUTES
>>           Use DXE services to check and alter memory protection
>>           attributes during boot via EFISTUB to ensure that memory
>>           ranges used by the kernel are writable and executable.
>> +         This option also enables stricter memory attributes
>> +         on compressed kernel PE image.
>> 
>>  config EFI_PARAMS_FROM_FDT
>>         bool
>> diff --git a/drivers/firmware/efi/libstub/Makefile 
>> b/drivers/firmware/efi/libstub/Makefile
>> index be8b8c6e8b40..99b81c95344c 100644
>> --- a/drivers/firmware/efi/libstub/Makefile
>> +++ b/drivers/firmware/efi/libstub/Makefile
>> @@ -88,7 +88,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)        += efi-stub.o 
>> string.o intrinsics.o systable.o \
>> 
>>  lib-$(CONFIG_ARM)              += arm32-stub.o
>>  lib-$(CONFIG_ARM64)            += arm64.o arm64-stub.o arm64-entry.o 
>> smbios.o
>> -lib-$(CONFIG_X86)              += x86-stub.o
>> +lib-$(CONFIG_X86)              += x86-stub.o x86-extract-direct.o
>>  lib-$(CONFIG_RISCV)            += riscv.o riscv-stub.o
>>  lib-$(CONFIG_LOONGARCH)                += loongarch.o 
>> loongarch-stub.o
>> 
>> diff --git a/drivers/firmware/efi/libstub/x86-extract-direct.c 
>> b/drivers/firmware/efi/libstub/x86-extract-direct.c
>> new file mode 100644
>> index 000000000000..4ecbc4a9b3ed
>> --- /dev/null
>> +++ b/drivers/firmware/efi/libstub/x86-extract-direct.c
>> @@ -0,0 +1,208 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +
>> +#include <linux/acpi.h>
>> +#include <linux/efi.h>
>> +#include <linux/elf.h>
>> +#include <linux/stddef.h>
>> +
>> +#include <asm/efi.h>
>> +#include <asm/e820/types.h>
>> +#include <asm/desc.h>
>> +#include <asm/boot.h>
>> +#include <asm/bootparam_utils.h>
>> +#include <asm/shared/extract.h>
>> +#include <asm/shared/pgtable.h>
>> +
>> +#include "efistub.h"
>> +#include "x86-stub.h"
>> +
>> +static efi_handle_t image_handle;
>> +
>> +static void do_puthex(unsigned long value)
>> +{
>> +       efi_printk("%08lx", value);
>> +}
>> +
>> +static void do_putstr(const char *msg)
>> +{
>> +       efi_printk("%s", msg);
>> +}
>> +
>> +static unsigned long do_map_range(unsigned long start,
>> +                                 unsigned long end,
>> +                                 unsigned int flags)
>> +{
>> +       efi_status_t status;
>> +
>> +       unsigned long size = end - start;
>> +
>> +       if (flags & MAP_ALLOC) {
>> +               unsigned long addr;
>> +
>> +               status = efi_low_alloc_above(size, 
>> CONFIG_PHYSICAL_ALIGN,
>> +                                            &addr, start);

Memory for the kernel image is allocated here.
This function is getting called from the boot/compressed/misc.c with 
MAP_ALLOC flag
when the address for the kernel is picked.

>> +               if (status != EFI_SUCCESS) {
>> +                       efi_err("Unable to allocate memory for 
>> uncompressed kernel");
>> +                       efi_exit(image_handle, EFI_OUT_OF_RESOURCES);
>> +               }
>> +
>> +               if (start != addr) {
>> +                       efi_debug("Unable to allocate at given 
>> address"
>> +                                 " (desired=0x%lx, actual=0x%lx)",
>> +                                 (unsigned long)start, addr);
>> +                       start = addr;
>> +               }
>> +       }
>> +
>> +       if ((flags & (MAP_PROTECT | MAP_ALLOC)) &&
>> +           IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
>> +               unsigned long attr = 0;
>> +
>> +               if (!(flags & MAP_EXEC))
>> +                       attr |= EFI_MEMORY_XP;
>> +
>> +               if (!(flags & MAP_WRITE))
>> +                       attr |= EFI_MEMORY_RO;
>> +
>> +               status = efi_adjust_memory_range_protection(start, 
>> size, attr);
>> +               if (status != EFI_SUCCESS)
>> +                       efi_err("Unable to protect memory range");
>> +       }
>> +
>> +       return start;
>> +}
>> +
>> +/*
>> + * Trampoline takes 3 pages and can be loaded in first megabyte of 
>> memory
>> + * with its end placed between 0 and 640k where BIOS might start.
>> + * (see arch/x86/boot/compressed/pgtable_64.c)
>> + */
>> +
>> +#ifdef CONFIG_64BIT
>> +static efi_status_t prepare_trampoline(void)
>> +{
>> +       efi_status_t status;
>> +
>> +       status = efi_allocate_pages(TRAMPOLINE_32BIT_SIZE,
>> +                                   (unsigned long 
>> *)&trampoline_32bit,
>> +                                   TRAMPOLINE_32BIT_PLACEMENT_MAX);
>> +
>> +       if (status != EFI_SUCCESS)
>> +               return status;
>> +
>> +       unsigned long trampoline_start = (unsigned 
>> long)trampoline_32bit;
>> +
>> +       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
>> +
>> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
>> +               /* First page of trampoline is a top level page table 
>> */
>> +               efi_adjust_memory_range_protection(trampoline_start,
>> +                                                  PAGE_SIZE,
>> +                                                  EFI_MEMORY_XP);
>> +       }
>> +
>> +       /* Second page of trampoline is the code (with a padding) */
>> +
>> +       void *caddr = (void *)trampoline_32bit + 
>> TRAMPOLINE_32BIT_CODE_OFFSET;
>> +
>> +       memcpy(caddr, trampoline_32bit_src, 
>> TRAMPOLINE_32BIT_CODE_SIZE);
>> +
>> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
>> +               efi_adjust_memory_range_protection((unsigned 
>> long)caddr,
>> +                                                  PAGE_SIZE,
>> +                                                  EFI_MEMORY_RO);
>> +
>> +               /* And the last page of trampoline is the stack */
>> +
>> +               efi_adjust_memory_range_protection(trampoline_start + 
>> 2 * PAGE_SIZE,
>> +                                                  PAGE_SIZE,
>> +                                                  EFI_MEMORY_XP);
>> +       }
>> +
>> +       return EFI_SUCCESS;
>> +}
>> +#else
>> +static inline efi_status_t prepare_trampoline(void)
>> +{
>> +       return EFI_SUCCESS;
>> +}
>> +#endif
>> +
>> +static efi_status_t init_loader_data(efi_handle_t handle,
>> +                                    struct boot_params *params,
>> +                                    struct efi_boot_memmap **map)
>> +{
>> +       struct efi_info *efi = (void *)&params->efi_info;
>> +       efi_status_t status;
>> +
>> +       status = efi_get_memory_map(map, false);
>> +
>> +       if (status != EFI_SUCCESS) {
>> +               efi_err("Unable to get EFI memory map...\n");
>> +               return status;
>> +       }
>> +
>> +       const char *signature = efi_is_64bit() ? 
>> EFI64_LOADER_SIGNATURE
>> +                                              : 
>> EFI32_LOADER_SIGNATURE;
>> +
>> +       memcpy(&efi->efi_loader_signature, signature, sizeof(__u32));
>> +
>> +       efi->efi_memdesc_size = (*map)->desc_size;
>> +       efi->efi_memdesc_version = (*map)->desc_ver;
>> +       efi->efi_memmap_size = (*map)->map_size;
>> +
>> +       efi_set_u64_split((unsigned long)(*map)->map,
>> +                         &efi->efi_memmap, &efi->efi_memmap_hi);
>> +
>> +       efi_set_u64_split((unsigned long)efi_system_table,
>> +                         &efi->efi_systab, &efi->efi_systab_hi);
>> +
>> +       image_handle = handle;
>> +
>> +       return EFI_SUCCESS;
>> +}
>> +
>> +static void free_loader_data(struct boot_params *params, struct 
>> efi_boot_memmap *map)
>> +{
>> +       struct efi_info *efi = (void *)&params->efi_info;
>> +
>> +       efi_bs_call(free_pool, map);
>> +
>> +       efi->efi_memdesc_size = 0;
>> +       efi->efi_memdesc_version = 0;
>> +       efi->efi_memmap_size = 0;
>> +       efi_set_u64_split(0, &efi->efi_memmap, &efi->efi_memmap_hi);
>> +}
>> +
>> +extern unsigned char input_data[];
>> +extern unsigned int input_len, output_len;
>> +
>> +unsigned long extract_kernel_direct(efi_handle_t handle, struct 
>> boot_params *params)
>> +{
>> +
>> +       void *res;
>> +       efi_status_t status;
>> +       struct efi_extract_callbacks cb = { 0 };
>> +
>> +       status = prepare_trampoline();
>> +
>> +       if (status != EFI_SUCCESS)
>> +               return 0;
>> +
>> +       /* Prepare environment for do_extract_kernel() call */
>> +       struct efi_boot_memmap *map = NULL;
>> +       status = init_loader_data(handle, params, &map);
>> +
>> +       if (status != EFI_SUCCESS)
>> +               return 0;
>> +
>> +       cb.puthex = do_puthex;
>> +       cb.putstr = do_putstr;
>> +       cb.map_range = do_map_range;
>> +
>> +       res = efi_extract_kernel(params, &cb, input_data, input_len, 
>> output_len);
>> +
>> +       free_loader_data(params, map);
>> +
>> +       return (unsigned long)res;
>> +}
>> diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
>> b/drivers/firmware/efi/libstub/x86-stub.c
>> index 7fb1eff88a18..1d1ab1911fd3 100644
>> --- a/drivers/firmware/efi/libstub/x86-stub.c
>> +++ b/drivers/firmware/efi/libstub/x86-stub.c
>> @@ -17,6 +17,7 @@
>>  #include <asm/boot.h>
>> 
>>  #include "efistub.h"
>> +#include "x86-stub.h"
>> 
>>  /* Maximum physical address for 64-bit kernel with 4-level paging */
>>  #define MAXMEM_X86_64_4LEVEL (1ull << 46)
>> @@ -24,7 +25,7 @@
>>  const efi_system_table_t *efi_system_table;
>>  const efi_dxe_services_table_t *efi_dxe_table;
>>  u32 image_offset __section(".data");
>> -static efi_loaded_image_t *image = NULL;
>> +static efi_loaded_image_t *image __section(".data");
>> 
>>  static efi_status_t
>>  preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct 
>> pci_setup_rom **__rom)
>> @@ -212,55 +213,9 @@ static void 
>> retrieve_apple_device_properties(struct boot_params *boot_params)
>>         }
>>  }
>> 
>> -/*
>> - * Trampoline takes 2 pages and can be loaded in first megabyte of 
>> memory
>> - * with its end placed between 128k and 640k where BIOS might start.
>> - * (see arch/x86/boot/compressed/pgtable_64.c)
>> - *
>> - * We cannot find exact trampoline placement since memory map
>> - * can be modified by UEFI, and it can alter the computed address.
>> - */
>> -
>> -#define TRAMPOLINE_PLACEMENT_BASE ((128 - 8)*1024)
>> -#define TRAMPOLINE_PLACEMENT_SIZE (640*1024 - (128 - 8)*1024)
>> -
>> -void startup_32(struct boot_params *boot_params);
>> -
>> -static void
>> -setup_memory_protection(unsigned long image_base, unsigned long 
>> image_size)
>> -{
>> -       /*
>> -        * Allow execution of possible trampoline used
>> -        * for switching between 4- and 5-level page tables
>> -        * and relocated kernel image.
>> -        */
>> -
>> -       efi_adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
>> -                                          TRAMPOLINE_PLACEMENT_SIZE, 
>> 0);
>> -
>> -#ifdef CONFIG_64BIT
>> -       if (image_base != (unsigned long)startup_32)
>> -               efi_adjust_memory_range_protection(image_base, 
>> image_size, 0);
>> -#else
>> -       /*
>> -        * Clear protection flags on a whole range of possible
>> -        * addresses used for KASLR. We don't need to do that
>> -        * on x86_64, since KASLR/extraction is performed after
>> -        * dedicated identity page tables are built and we only
>> -        * need to remove possible protection on relocated image
>> -        * itself disregarding further relocations.
>> -        */
>> -       efi_adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
>> -                                          KERNEL_IMAGE_SIZE - 
>> LOAD_PHYSICAL_ADDR,
>> -                                          0);
>> -#endif
>> -}
>> -
>>  static const efi_char16_t apple[] = L"Apple";
>> 
>> -static void setup_quirks(struct boot_params *boot_params,
>> -                        unsigned long image_base,
>> -                        unsigned long image_size)
>> +static void setup_quirks(struct boot_params *boot_params)
>>  {
>>         efi_char16_t *fw_vendor = (efi_char16_t *)(unsigned long)
>>                 efi_table_attr(efi_system_table, fw_vendor);
>> @@ -269,9 +224,6 @@ static void setup_quirks(struct boot_params 
>> *boot_params,
>>                 if (IS_ENABLED(CONFIG_APPLE_PROPERTIES))
>>                         retrieve_apple_device_properties(boot_params);
>>         }
>> -
>> -       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES))
>> -               setup_memory_protection(image_base, image_size);
>>  }
>> 
>>  /*
>> @@ -384,7 +336,7 @@ static void setup_graphics(struct boot_params 
>> *boot_params)
>>  }
>> 
>> 
>> -static void __noreturn efi_exit(efi_handle_t handle, efi_status_t 
>> status)
>> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
>>  {
>>         efi_bs_call(exit, handle, status, 0, NULL);
>>         for(;;)
>> @@ -707,8 +659,7 @@ static efi_status_t exit_boot(struct boot_params 
>> *boot_params, void *handle)
>>  }
>> 
>>  /*
>> - * On success, we return the address of startup_32, which has 
>> potentially been
>> - * relocated by efi_relocate_kernel.
>> + * On success, we return extracted kernel entry point.
>>   * On failure, we exit to the firmware via efi_exit instead of 
>> returning.
>>   */
>>  asmlinkage unsigned long efi_main(efi_handle_t handle,
>> @@ -733,60 +684,6 @@ asmlinkage unsigned long efi_main(efi_handle_t 
>> handle,
>>                 efi_dxe_table = NULL;
>>         }
>> 
>> -       /*
>> -        * If the kernel isn't already loaded at a suitable address,
>> -        * relocate it.
>> -        *
>> -        * It must be loaded above LOAD_PHYSICAL_ADDR.
>> -        *
>> -        * The maximum address for 64-bit is 1 << 46 for 4-level 
>> paging. This
>> -        * is defined as the macro MAXMEM, but unfortunately that is 
>> not a
>> -        * compile-time constant if 5-level paging is configured, so 
>> we instead
>> -        * define our own macro for use here.
>> -        *
>> -        * For 32-bit, the maximum address is complicated to figure 
>> out, for
>> -        * now use KERNEL_IMAGE_SIZE, which will be 512MiB, the same 
>> as what
>> -        * KASLR uses.
>> -        *
>> -        * Also relocate it if image_offset is zero, i.e. the kernel 
>> wasn't
>> -        * loaded by LoadImage, but rather by a bootloader that called 
>> the
>> -        * handover entry. The reason we must always relocate in this 
>> case is
>> -        * to handle the case of systemd-boot booting a unified kernel 
>> image,
>> -        * which is a PE executable that contains the bzImage and an 
>> initrd as
>> -        * COFF sections. The initrd section is placed after the 
>> bzImage
>> -        * without ensuring that there are at least init_size bytes 
>> available
>> -        * for the bzImage, and thus the compressed kernel's startup 
>> code may
>> -        * overwrite the initrd unless it is moved out of the way.
>> -        */
>> -
>> -       buffer_start = ALIGN(bzimage_addr - image_offset,
>> -                            hdr->kernel_alignment);
>> -       buffer_end = buffer_start + hdr->init_size;
>> -
>> -       if ((buffer_start < LOAD_PHYSICAL_ADDR)                        
>>       ||
>> -           (IS_ENABLED(CONFIG_X86_32) && buffer_end > 
>> KERNEL_IMAGE_SIZE)    ||
>> -           (IS_ENABLED(CONFIG_X86_64) && buffer_end > 
>> MAXMEM_X86_64_4LEVEL) ||
>> -           (image_offset == 0)) {
>> -               extern char _bss[];
>> -
>> -               status = efi_relocate_kernel(&bzimage_addr,
>> -                                            (unsigned long)_bss - 
>> bzimage_addr,
>> -                                            hdr->init_size,
>> -                                            hdr->pref_address,
>> -                                            hdr->kernel_alignment,
>> -                                            LOAD_PHYSICAL_ADDR);
>> -               if (status != EFI_SUCCESS) {
>> -                       efi_err("efi_relocate_kernel() failed!\n");
>> -                       goto fail;
>> -               }
>> -               /*
>> -                * Now that we've copied the kernel elsewhere, we no 
>> longer
>> -                * have a set up block before startup_32(), so reset 
>> image_offset
>> -                * to zero in case it was set earlier.
>> -                */
>> -               image_offset = 0;
>> -       }
>> -
>>  #ifdef CONFIG_CMDLINE_BOOL
>>         status = efi_parse_options(CONFIG_CMDLINE);
>>         if (status != EFI_SUCCESS) {
>> @@ -843,7 +740,11 @@ asmlinkage unsigned long efi_main(efi_handle_t 
>> handle,
>> 
>>         setup_efi_pci(boot_params);
>> 
>> -       setup_quirks(boot_params, bzimage_addr, buffer_end - 
>> buffer_start);
>> +       setup_quirks(boot_params);
>> +
>> +       bzimage_addr = extract_kernel_direct(handle, boot_params);
>> +       if (!bzimage_addr)
>> +               goto fail;
>> 
>>         status = exit_boot(boot_params, handle);
>>         if (status != EFI_SUCCESS) {
>> diff --git a/drivers/firmware/efi/libstub/x86-stub.h 
>> b/drivers/firmware/efi/libstub/x86-stub.h
>> new file mode 100644
>> index 000000000000..baecc7c6e602
>> --- /dev/null
>> +++ b/drivers/firmware/efi/libstub/x86-stub.h
>> @@ -0,0 +1,14 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#ifndef _DRIVERS_FIRMWARE_EFI_X86STUB_H
>> +#define _DRIVERS_FIRMWARE_EFI_X86STUB_H
>> +
>> +#include <linux/efi.h>
>> +
>> +#include <asm/bootparam.h>
>> +
>> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status);
>> +unsigned long extract_kernel_direct(efi_handle_t handle, struct 
>> boot_params *boot_params);
>> +void startup_32(struct boot_params *boot_params);
>> +
>> +#endif
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2023-03-09 17:10     ` Evgeniy Baskov
@ 2023-03-09 17:11       ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-09 17:11 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 9 Mar 2023 at 18:10, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-09 19:49, Ard Biesheuvel wrote:
> > On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> Doing it that way allows setting up stricter memory attributes,
> >> simplifies boot code path and removes potential relocation
> >> of kernel image.
> >>
> >> Wire up required interfaces and minimally initialize zero page
> >> fields needed for it to function correctly.
> >>
> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >
> > OK I just realized that there is a problem with this approach: since
> > we now decompress the image while running in the EFI stub (i.e.,
> > before ExitBootServices()), we cannot just randomly pick a
> > EFI_CONVENTIONAL_MEMORY region to place the kernel, we need to
> > allocate the pages using the boot services. Otherwise, subsequent
> > allocations (or concurrent ones occurring in the firmware in event
> > handlers etc) may land right in the middle, which is unlikely to be
> > what we want.
>
> It does allocate pages for the kernel.
> I've marked the place below.
>

Ah excellent, thanks for clearing that up.


> >
> >
> >> ---
> >>  arch/x86/boot/compressed/head_32.S            |  50 ++++-
> >>  arch/x86/boot/compressed/head_64.S            |  58 ++++-
> >>  drivers/firmware/efi/Kconfig                  |   2 +
> >>  drivers/firmware/efi/libstub/Makefile         |   2 +-
> >>  .../firmware/efi/libstub/x86-extract-direct.c | 208
> >> ++++++++++++++++++
> >>  drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
> >>  drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
> >>  7 files changed, 338 insertions(+), 115 deletions(-)
> >>  create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
> >>  create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
> >>
> >> diff --git a/arch/x86/boot/compressed/head_32.S
> >> b/arch/x86/boot/compressed/head_32.S
> >> index ead6007df1e5..0be75e5072ae 100644
> >> --- a/arch/x86/boot/compressed/head_32.S
> >> +++ b/arch/x86/boot/compressed/head_32.S
> >> @@ -152,11 +152,57 @@ SYM_FUNC_END(startup_32)
> >>
> >>  #ifdef CONFIG_EFI_STUB
> >>  SYM_FUNC_START(efi32_stub_entry)
> >> +/*
> >> + * Calculate the delta between where we were compiled to run
> >> + * at and where we were actually loaded at.  This can only be done
> >> + * with a short local call on x86.  Nothing  else will tell us what
> >> + * address we are running at.  The reserved chunk of the real-mode
> >> + * data at 0x1e4 (defined as a scratch field) are used as the stack
> >> + * for this calculation. Only 4 bytes are needed.
> >> + */
> >> +       call    1f
> >> +1:     popl    %ebx
> >> +       addl    $_GLOBAL_OFFSET_TABLE_+(.-1b), %ebx
> >> +
> >> +       /* Clear BSS */
> >> +       xorl    %eax, %eax
> >> +       leal    _bss@GOTOFF(%ebx), %edi
> >> +       leal    _ebss@GOTOFF(%ebx), %ecx
> >> +       subl    %edi, %ecx
> >> +       shrl    $2, %ecx
> >> +       rep     stosl
> >> +
> >>         add     $0x4, %esp
> >>         movl    8(%esp), %esi   /* save boot_params pointer */
> >> +       movl    %edx, %edi      /* save GOT address */
> >>         call    efi_main
> >> -       /* efi_main returns the possibly relocated address of
> >> startup_32 */
> >> -       jmp     *%eax
> >> +       movl    %eax, %ecx
> >> +
> >> +       /*
> >> +        * efi_main returns the possibly
> >> +        * relocated address of extracted kernel entry point.
> >> +        */
> >> +
> >> +       cli
> >> +
> >> +       /* Load new GDT */
> >> +       leal    gdt@GOTOFF(%ebx), %eax
> >> +       movl    %eax, 2(%eax)
> >> +       lgdt    (%eax)
> >> +
> >> +       /* Load segment registers with our descriptors */
> >> +       movl    $__BOOT_DS, %eax
> >> +       movl    %eax, %ds
> >> +       movl    %eax, %es
> >> +       movl    %eax, %fs
> >> +       movl    %eax, %gs
> >> +       movl    %eax, %ss
> >> +
> >> +       /* Zero EFLAGS */
> >> +       pushl   $0
> >> +       popfl
> >> +
> >> +       jmp     *%ecx
> >>  SYM_FUNC_END(efi32_stub_entry)
> >>  SYM_FUNC_ALIAS(efi_stub_entry, efi32_stub_entry)
> >>  #endif
> >> diff --git a/arch/x86/boot/compressed/head_64.S
> >> b/arch/x86/boot/compressed/head_64.S
> >> index 2dd8be0583d2..7cfef7bd0424 100644
> >> --- a/arch/x86/boot/compressed/head_64.S
> >> +++ b/arch/x86/boot/compressed/head_64.S
> >> @@ -529,12 +529,64 @@ SYM_CODE_END(startup_64)
> >>         .org 0x390
> >>  #endif
> >>  SYM_FUNC_START(efi64_stub_entry)
> >> +       /* Preserve first parameter */
> >> +       movq    %rdi, %r10
> >> +
> >> +       /* Clear BSS */
> >> +       xorl    %eax, %eax
> >> +       leaq    _bss(%rip), %rdi
> >> +       leaq    _ebss(%rip), %rcx
> >> +       subq    %rdi, %rcx
> >> +       shrq    $3, %rcx
> >> +       rep     stosq
> >> +
> >>         and     $~0xf, %rsp                     /* realign the stack
> >> */
> >>         movq    %rdx, %rbx                      /* save boot_params
> >> pointer */
> >> +       movq    %r10, %rdi
> >>         call    efi_main
> >> -       movq    %rbx,%rsi
> >> -       leaq    rva(startup_64)(%rax), %rax
> >> -       jmp     *%rax
> >> +
> >> +       cld
> >> +       cli
> >> +
> >> +       movq    %rbx, %rdi /* boot_params */
> >> +       movq    %rax, %rsi /* decompressed kernel address */
> >> +
> >> +       /* Make sure we have GDT with 32-bit code segment */
> >> +       leaq    gdt64(%rip), %rax
> >> +       addq    %rax, 2(%rax)
> >> +       lgdt    (%rax)
> >> +
> >> +       /* Setup data segments. */
> >> +       xorl    %eax, %eax
> >> +       movl    %eax, %ds
> >> +       movl    %eax, %es
> >> +       movl    %eax, %ss
> >> +       movl    %eax, %fs
> >> +       movl    %eax, %gs
> >> +
> >> +       pushq   %rsi
> >> +       pushq   %rdi
> >> +
> >> +       call    load_stage1_idt
> >> +       call    enable_nx_if_supported
> >> +
> >> +       call    trampoline_pgtable_init
> >> +       movq    %rax, %rdx
> >> +
> >> +
> >> +       /* Swap %rsi and %rsi */
> >> +       popq    %rsi
> >> +       popq    %rdi
> >> +
> >> +       /* Save the trampoline address in RCX */
> >> +       movq    trampoline_32bit(%rip), %rcx
> >> +
> >> +       /* Switch to compatibility mode (CS.L = 0 CS.D = 1) via far
> >> return */
> >> +       pushq   $__KERNEL32_CS
> >> +       leaq    TRAMPOLINE_32BIT_CODE_OFFSET(%rcx), %rax
> >> +       pushq   %rax
> >> +       lretq
> >> +
> >>  SYM_FUNC_END(efi64_stub_entry)
> >>  SYM_FUNC_ALIAS(efi_stub_entry, efi64_stub_entry)
> >>  #endif
> >> diff --git a/drivers/firmware/efi/Kconfig
> >> b/drivers/firmware/efi/Kconfig
> >> index 043ca31c114e..f50c2a84a754 100644
> >> --- a/drivers/firmware/efi/Kconfig
> >> +++ b/drivers/firmware/efi/Kconfig
> >> @@ -58,6 +58,8 @@ config EFI_DXE_MEM_ATTRIBUTES
> >>           Use DXE services to check and alter memory protection
> >>           attributes during boot via EFISTUB to ensure that memory
> >>           ranges used by the kernel are writable and executable.
> >> +         This option also enables stricter memory attributes
> >> +         on compressed kernel PE image.
> >>
> >>  config EFI_PARAMS_FROM_FDT
> >>         bool
> >> diff --git a/drivers/firmware/efi/libstub/Makefile
> >> b/drivers/firmware/efi/libstub/Makefile
> >> index be8b8c6e8b40..99b81c95344c 100644
> >> --- a/drivers/firmware/efi/libstub/Makefile
> >> +++ b/drivers/firmware/efi/libstub/Makefile
> >> @@ -88,7 +88,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)        += efi-stub.o
> >> string.o intrinsics.o systable.o \
> >>
> >>  lib-$(CONFIG_ARM)              += arm32-stub.o
> >>  lib-$(CONFIG_ARM64)            += arm64.o arm64-stub.o arm64-entry.o
> >> smbios.o
> >> -lib-$(CONFIG_X86)              += x86-stub.o
> >> +lib-$(CONFIG_X86)              += x86-stub.o x86-extract-direct.o
> >>  lib-$(CONFIG_RISCV)            += riscv.o riscv-stub.o
> >>  lib-$(CONFIG_LOONGARCH)                += loongarch.o
> >> loongarch-stub.o
> >>
> >> diff --git a/drivers/firmware/efi/libstub/x86-extract-direct.c
> >> b/drivers/firmware/efi/libstub/x86-extract-direct.c
> >> new file mode 100644
> >> index 000000000000..4ecbc4a9b3ed
> >> --- /dev/null
> >> +++ b/drivers/firmware/efi/libstub/x86-extract-direct.c
> >> @@ -0,0 +1,208 @@
> >> +// SPDX-License-Identifier: GPL-2.0-only
> >> +
> >> +#include <linux/acpi.h>
> >> +#include <linux/efi.h>
> >> +#include <linux/elf.h>
> >> +#include <linux/stddef.h>
> >> +
> >> +#include <asm/efi.h>
> >> +#include <asm/e820/types.h>
> >> +#include <asm/desc.h>
> >> +#include <asm/boot.h>
> >> +#include <asm/bootparam_utils.h>
> >> +#include <asm/shared/extract.h>
> >> +#include <asm/shared/pgtable.h>
> >> +
> >> +#include "efistub.h"
> >> +#include "x86-stub.h"
> >> +
> >> +static efi_handle_t image_handle;
> >> +
> >> +static void do_puthex(unsigned long value)
> >> +{
> >> +       efi_printk("%08lx", value);
> >> +}
> >> +
> >> +static void do_putstr(const char *msg)
> >> +{
> >> +       efi_printk("%s", msg);
> >> +}
> >> +
> >> +static unsigned long do_map_range(unsigned long start,
> >> +                                 unsigned long end,
> >> +                                 unsigned int flags)
> >> +{
> >> +       efi_status_t status;
> >> +
> >> +       unsigned long size = end - start;
> >> +
> >> +       if (flags & MAP_ALLOC) {
> >> +               unsigned long addr;
> >> +
> >> +               status = efi_low_alloc_above(size,
> >> CONFIG_PHYSICAL_ALIGN,
> >> +                                            &addr, start);
>
> Memory for the kernel image is allocated here.
> This function is getting called from the boot/compressed/misc.c with
> MAP_ALLOC flag
> when the address for the kernel is picked.
>
> >> +               if (status != EFI_SUCCESS) {
> >> +                       efi_err("Unable to allocate memory for
> >> uncompressed kernel");
> >> +                       efi_exit(image_handle, EFI_OUT_OF_RESOURCES);
> >> +               }
> >> +
> >> +               if (start != addr) {
> >> +                       efi_debug("Unable to allocate at given
> >> address"
> >> +                                 " (desired=0x%lx, actual=0x%lx)",
> >> +                                 (unsigned long)start, addr);
> >> +                       start = addr;
> >> +               }
> >> +       }
> >> +
> >> +       if ((flags & (MAP_PROTECT | MAP_ALLOC)) &&
> >> +           IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> >> +               unsigned long attr = 0;
> >> +
> >> +               if (!(flags & MAP_EXEC))
> >> +                       attr |= EFI_MEMORY_XP;
> >> +
> >> +               if (!(flags & MAP_WRITE))
> >> +                       attr |= EFI_MEMORY_RO;
> >> +
> >> +               status = efi_adjust_memory_range_protection(start,
> >> size, attr);
> >> +               if (status != EFI_SUCCESS)
> >> +                       efi_err("Unable to protect memory range");
> >> +       }
> >> +
> >> +       return start;
> >> +}
> >> +
> >> +/*
> >> + * Trampoline takes 3 pages and can be loaded in first megabyte of
> >> memory
> >> + * with its end placed between 0 and 640k where BIOS might start.
> >> + * (see arch/x86/boot/compressed/pgtable_64.c)
> >> + */
> >> +
> >> +#ifdef CONFIG_64BIT
> >> +static efi_status_t prepare_trampoline(void)
> >> +{
> >> +       efi_status_t status;
> >> +
> >> +       status = efi_allocate_pages(TRAMPOLINE_32BIT_SIZE,
> >> +                                   (unsigned long
> >> *)&trampoline_32bit,
> >> +                                   TRAMPOLINE_32BIT_PLACEMENT_MAX);
> >> +
> >> +       if (status != EFI_SUCCESS)
> >> +               return status;
> >> +
> >> +       unsigned long trampoline_start = (unsigned
> >> long)trampoline_32bit;
> >> +
> >> +       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
> >> +
> >> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> >> +               /* First page of trampoline is a top level page table
> >> */
> >> +               efi_adjust_memory_range_protection(trampoline_start,
> >> +                                                  PAGE_SIZE,
> >> +                                                  EFI_MEMORY_XP);
> >> +       }
> >> +
> >> +       /* Second page of trampoline is the code (with a padding) */
> >> +
> >> +       void *caddr = (void *)trampoline_32bit +
> >> TRAMPOLINE_32BIT_CODE_OFFSET;
> >> +
> >> +       memcpy(caddr, trampoline_32bit_src,
> >> TRAMPOLINE_32BIT_CODE_SIZE);
> >> +
> >> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> >> +               efi_adjust_memory_range_protection((unsigned
> >> long)caddr,
> >> +                                                  PAGE_SIZE,
> >> +                                                  EFI_MEMORY_RO);
> >> +
> >> +               /* And the last page of trampoline is the stack */
> >> +
> >> +               efi_adjust_memory_range_protection(trampoline_start +
> >> 2 * PAGE_SIZE,
> >> +                                                  PAGE_SIZE,
> >> +                                                  EFI_MEMORY_XP);
> >> +       }
> >> +
> >> +       return EFI_SUCCESS;
> >> +}
> >> +#else
> >> +static inline efi_status_t prepare_trampoline(void)
> >> +{
> >> +       return EFI_SUCCESS;
> >> +}
> >> +#endif
> >> +
> >> +static efi_status_t init_loader_data(efi_handle_t handle,
> >> +                                    struct boot_params *params,
> >> +                                    struct efi_boot_memmap **map)
> >> +{
> >> +       struct efi_info *efi = (void *)&params->efi_info;
> >> +       efi_status_t status;
> >> +
> >> +       status = efi_get_memory_map(map, false);
> >> +
> >> +       if (status != EFI_SUCCESS) {
> >> +               efi_err("Unable to get EFI memory map...\n");
> >> +               return status;
> >> +       }
> >> +
> >> +       const char *signature = efi_is_64bit() ?
> >> EFI64_LOADER_SIGNATURE
> >> +                                              :
> >> EFI32_LOADER_SIGNATURE;
> >> +
> >> +       memcpy(&efi->efi_loader_signature, signature, sizeof(__u32));
> >> +
> >> +       efi->efi_memdesc_size = (*map)->desc_size;
> >> +       efi->efi_memdesc_version = (*map)->desc_ver;
> >> +       efi->efi_memmap_size = (*map)->map_size;
> >> +
> >> +       efi_set_u64_split((unsigned long)(*map)->map,
> >> +                         &efi->efi_memmap, &efi->efi_memmap_hi);
> >> +
> >> +       efi_set_u64_split((unsigned long)efi_system_table,
> >> +                         &efi->efi_systab, &efi->efi_systab_hi);
> >> +
> >> +       image_handle = handle;
> >> +
> >> +       return EFI_SUCCESS;
> >> +}
> >> +
> >> +static void free_loader_data(struct boot_params *params, struct
> >> efi_boot_memmap *map)
> >> +{
> >> +       struct efi_info *efi = (void *)&params->efi_info;
> >> +
> >> +       efi_bs_call(free_pool, map);
> >> +
> >> +       efi->efi_memdesc_size = 0;
> >> +       efi->efi_memdesc_version = 0;
> >> +       efi->efi_memmap_size = 0;
> >> +       efi_set_u64_split(0, &efi->efi_memmap, &efi->efi_memmap_hi);
> >> +}
> >> +
> >> +extern unsigned char input_data[];
> >> +extern unsigned int input_len, output_len;
> >> +
> >> +unsigned long extract_kernel_direct(efi_handle_t handle, struct
> >> boot_params *params)
> >> +{
> >> +
> >> +       void *res;
> >> +       efi_status_t status;
> >> +       struct efi_extract_callbacks cb = { 0 };
> >> +
> >> +       status = prepare_trampoline();
> >> +
> >> +       if (status != EFI_SUCCESS)
> >> +               return 0;
> >> +
> >> +       /* Prepare environment for do_extract_kernel() call */
> >> +       struct efi_boot_memmap *map = NULL;
> >> +       status = init_loader_data(handle, params, &map);
> >> +
> >> +       if (status != EFI_SUCCESS)
> >> +               return 0;
> >> +
> >> +       cb.puthex = do_puthex;
> >> +       cb.putstr = do_putstr;
> >> +       cb.map_range = do_map_range;
> >> +
> >> +       res = efi_extract_kernel(params, &cb, input_data, input_len,
> >> output_len);
> >> +
> >> +       free_loader_data(params, map);
> >> +
> >> +       return (unsigned long)res;
> >> +}
> >> diff --git a/drivers/firmware/efi/libstub/x86-stub.c
> >> b/drivers/firmware/efi/libstub/x86-stub.c
> >> index 7fb1eff88a18..1d1ab1911fd3 100644
> >> --- a/drivers/firmware/efi/libstub/x86-stub.c
> >> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> >> @@ -17,6 +17,7 @@
> >>  #include <asm/boot.h>
> >>
> >>  #include "efistub.h"
> >> +#include "x86-stub.h"
> >>
> >>  /* Maximum physical address for 64-bit kernel with 4-level paging */
> >>  #define MAXMEM_X86_64_4LEVEL (1ull << 46)
> >> @@ -24,7 +25,7 @@
> >>  const efi_system_table_t *efi_system_table;
> >>  const efi_dxe_services_table_t *efi_dxe_table;
> >>  u32 image_offset __section(".data");
> >> -static efi_loaded_image_t *image = NULL;
> >> +static efi_loaded_image_t *image __section(".data");
> >>
> >>  static efi_status_t
> >>  preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct
> >> pci_setup_rom **__rom)
> >> @@ -212,55 +213,9 @@ static void
> >> retrieve_apple_device_properties(struct boot_params *boot_params)
> >>         }
> >>  }
> >>
> >> -/*
> >> - * Trampoline takes 2 pages and can be loaded in first megabyte of
> >> memory
> >> - * with its end placed between 128k and 640k where BIOS might start.
> >> - * (see arch/x86/boot/compressed/pgtable_64.c)
> >> - *
> >> - * We cannot find exact trampoline placement since memory map
> >> - * can be modified by UEFI, and it can alter the computed address.
> >> - */
> >> -
> >> -#define TRAMPOLINE_PLACEMENT_BASE ((128 - 8)*1024)
> >> -#define TRAMPOLINE_PLACEMENT_SIZE (640*1024 - (128 - 8)*1024)
> >> -
> >> -void startup_32(struct boot_params *boot_params);
> >> -
> >> -static void
> >> -setup_memory_protection(unsigned long image_base, unsigned long
> >> image_size)
> >> -{
> >> -       /*
> >> -        * Allow execution of possible trampoline used
> >> -        * for switching between 4- and 5-level page tables
> >> -        * and relocated kernel image.
> >> -        */
> >> -
> >> -       efi_adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
> >> -                                          TRAMPOLINE_PLACEMENT_SIZE,
> >> 0);
> >> -
> >> -#ifdef CONFIG_64BIT
> >> -       if (image_base != (unsigned long)startup_32)
> >> -               efi_adjust_memory_range_protection(image_base,
> >> image_size, 0);
> >> -#else
> >> -       /*
> >> -        * Clear protection flags on a whole range of possible
> >> -        * addresses used for KASLR. We don't need to do that
> >> -        * on x86_64, since KASLR/extraction is performed after
> >> -        * dedicated identity page tables are built and we only
> >> -        * need to remove possible protection on relocated image
> >> -        * itself disregarding further relocations.
> >> -        */
> >> -       efi_adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
> >> -                                          KERNEL_IMAGE_SIZE -
> >> LOAD_PHYSICAL_ADDR,
> >> -                                          0);
> >> -#endif
> >> -}
> >> -
> >>  static const efi_char16_t apple[] = L"Apple";
> >>
> >> -static void setup_quirks(struct boot_params *boot_params,
> >> -                        unsigned long image_base,
> >> -                        unsigned long image_size)
> >> +static void setup_quirks(struct boot_params *boot_params)
> >>  {
> >>         efi_char16_t *fw_vendor = (efi_char16_t *)(unsigned long)
> >>                 efi_table_attr(efi_system_table, fw_vendor);
> >> @@ -269,9 +224,6 @@ static void setup_quirks(struct boot_params
> >> *boot_params,
> >>                 if (IS_ENABLED(CONFIG_APPLE_PROPERTIES))
> >>                         retrieve_apple_device_properties(boot_params);
> >>         }
> >> -
> >> -       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES))
> >> -               setup_memory_protection(image_base, image_size);
> >>  }
> >>
> >>  /*
> >> @@ -384,7 +336,7 @@ static void setup_graphics(struct boot_params
> >> *boot_params)
> >>  }
> >>
> >>
> >> -static void __noreturn efi_exit(efi_handle_t handle, efi_status_t
> >> status)
> >> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
> >>  {
> >>         efi_bs_call(exit, handle, status, 0, NULL);
> >>         for(;;)
> >> @@ -707,8 +659,7 @@ static efi_status_t exit_boot(struct boot_params
> >> *boot_params, void *handle)
> >>  }
> >>
> >>  /*
> >> - * On success, we return the address of startup_32, which has
> >> potentially been
> >> - * relocated by efi_relocate_kernel.
> >> + * On success, we return extracted kernel entry point.
> >>   * On failure, we exit to the firmware via efi_exit instead of
> >> returning.
> >>   */
> >>  asmlinkage unsigned long efi_main(efi_handle_t handle,
> >> @@ -733,60 +684,6 @@ asmlinkage unsigned long efi_main(efi_handle_t
> >> handle,
> >>                 efi_dxe_table = NULL;
> >>         }
> >>
> >> -       /*
> >> -        * If the kernel isn't already loaded at a suitable address,
> >> -        * relocate it.
> >> -        *
> >> -        * It must be loaded above LOAD_PHYSICAL_ADDR.
> >> -        *
> >> -        * The maximum address for 64-bit is 1 << 46 for 4-level
> >> paging. This
> >> -        * is defined as the macro MAXMEM, but unfortunately that is
> >> not a
> >> -        * compile-time constant if 5-level paging is configured, so
> >> we instead
> >> -        * define our own macro for use here.
> >> -        *
> >> -        * For 32-bit, the maximum address is complicated to figure
> >> out, for
> >> -        * now use KERNEL_IMAGE_SIZE, which will be 512MiB, the same
> >> as what
> >> -        * KASLR uses.
> >> -        *
> >> -        * Also relocate it if image_offset is zero, i.e. the kernel
> >> wasn't
> >> -        * loaded by LoadImage, but rather by a bootloader that called
> >> the
> >> -        * handover entry. The reason we must always relocate in this
> >> case is
> >> -        * to handle the case of systemd-boot booting a unified kernel
> >> image,
> >> -        * which is a PE executable that contains the bzImage and an
> >> initrd as
> >> -        * COFF sections. The initrd section is placed after the
> >> bzImage
> >> -        * without ensuring that there are at least init_size bytes
> >> available
> >> -        * for the bzImage, and thus the compressed kernel's startup
> >> code may
> >> -        * overwrite the initrd unless it is moved out of the way.
> >> -        */
> >> -
> >> -       buffer_start = ALIGN(bzimage_addr - image_offset,
> >> -                            hdr->kernel_alignment);
> >> -       buffer_end = buffer_start + hdr->init_size;
> >> -
> >> -       if ((buffer_start < LOAD_PHYSICAL_ADDR)
> >>       ||
> >> -           (IS_ENABLED(CONFIG_X86_32) && buffer_end >
> >> KERNEL_IMAGE_SIZE)    ||
> >> -           (IS_ENABLED(CONFIG_X86_64) && buffer_end >
> >> MAXMEM_X86_64_4LEVEL) ||
> >> -           (image_offset == 0)) {
> >> -               extern char _bss[];
> >> -
> >> -               status = efi_relocate_kernel(&bzimage_addr,
> >> -                                            (unsigned long)_bss -
> >> bzimage_addr,
> >> -                                            hdr->init_size,
> >> -                                            hdr->pref_address,
> >> -                                            hdr->kernel_alignment,
> >> -                                            LOAD_PHYSICAL_ADDR);
> >> -               if (status != EFI_SUCCESS) {
> >> -                       efi_err("efi_relocate_kernel() failed!\n");
> >> -                       goto fail;
> >> -               }
> >> -               /*
> >> -                * Now that we've copied the kernel elsewhere, we no
> >> longer
> >> -                * have a set up block before startup_32(), so reset
> >> image_offset
> >> -                * to zero in case it was set earlier.
> >> -                */
> >> -               image_offset = 0;
> >> -       }
> >> -
> >>  #ifdef CONFIG_CMDLINE_BOOL
> >>         status = efi_parse_options(CONFIG_CMDLINE);
> >>         if (status != EFI_SUCCESS) {
> >> @@ -843,7 +740,11 @@ asmlinkage unsigned long efi_main(efi_handle_t
> >> handle,
> >>
> >>         setup_efi_pci(boot_params);
> >>
> >> -       setup_quirks(boot_params, bzimage_addr, buffer_end -
> >> buffer_start);
> >> +       setup_quirks(boot_params);
> >> +
> >> +       bzimage_addr = extract_kernel_direct(handle, boot_params);
> >> +       if (!bzimage_addr)
> >> +               goto fail;
> >>
> >>         status = exit_boot(boot_params, handle);
> >>         if (status != EFI_SUCCESS) {
> >> diff --git a/drivers/firmware/efi/libstub/x86-stub.h
> >> b/drivers/firmware/efi/libstub/x86-stub.h
> >> new file mode 100644
> >> index 000000000000..baecc7c6e602
> >> --- /dev/null
> >> +++ b/drivers/firmware/efi/libstub/x86-stub.h
> >> @@ -0,0 +1,14 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +
> >> +#ifndef _DRIVERS_FIRMWARE_EFI_X86STUB_H
> >> +#define _DRIVERS_FIRMWARE_EFI_X86STUB_H
> >> +
> >> +#include <linux/efi.h>
> >> +
> >> +#include <asm/bootparam.h>
> >> +
> >> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status);
> >> +unsigned long extract_kernel_direct(efi_handle_t handle, struct
> >> boot_params *boot_params);
> >> +void startup_32(struct boot_params *boot_params);
> >> +
> >> +#endif
> >> --
> >> 2.37.4
> >>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 19/26] x86/build: Cleanup tools/build.c
  2023-03-09 16:50       ` Ard Biesheuvel
@ 2023-03-09 17:22         ` Evgeniy Baskov
  2023-03-09 17:37           ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-09 17:22 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-09 19:50, Ard Biesheuvel wrote:
> On Thu, 9 Mar 2023 at 17:25, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> On 2023-03-09 18:57, Ard Biesheuvel wrote:
>> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >>
>> >> Use newer C standard. Since kernel requires C99 compiler now,
>> >> we can make use of the new features to make the core more readable.
>> >>
>> >> Use mmap() for reading files also to make things simpler.
>> >>
>> >> Replace most magic numbers with defines.
>> >>
>> >> Should have no functional changes. This is done in preparation for the
>> >> next changes that makes generated PE header more spec compliant.
>> >>
>> >> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> >> Tested-by: Peter Jones <pjones@redhat.com>
>> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> >> ---
>> >>  arch/x86/boot/tools/build.c | 387
>> >> +++++++++++++++++++++++-------------
>> >>  1 file changed, 245 insertions(+), 142 deletions(-)
>> >>
>> >> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
>> >> index bd247692b701..fbc5315af032 100644
>> >> --- a/arch/x86/boot/tools/build.c
>> >> +++ b/arch/x86/boot/tools/build.c
>> >> @@ -25,20 +25,21 @@
>> >>   * Substantially overhauled by H. Peter Anvin, April 2007
>> >>   */
>> >>
>> >> +#include <fcntl.h>
>> >> +#include <stdarg.h>
>> >> +#include <stdint.h>
>> >>  #include <stdio.h>
>> >> -#include <string.h>
>> >>  #include <stdlib.h>
>> >> -#include <stdarg.h>
>> >> -#include <sys/types.h>
>> >> +#include <string.h>
>> >> +#include <sys/mman.h>
>> >>  #include <sys/stat.h>
>> >> +#include <sys/types.h>
>> >>  #include <unistd.h>
>> >> -#include <fcntl.h>
>> >> -#include <sys/mman.h>
>> >> +
>> >>  #include <tools/le_byteshift.h>
>> >> +#include <linux/pe.h>
>> >>
>> >> -typedef unsigned char  u8;
>> >> -typedef unsigned short u16;
>> >> -typedef unsigned int   u32;
>> >> +#define round_up(x, n) (((x) + (n) - 1) & ~((n) - 1))
>> >>
>> >>  #define DEFAULT_MAJOR_ROOT 0
>> >>  #define DEFAULT_MINOR_ROOT 0
>> >> @@ -48,8 +49,13 @@ typedef unsigned int   u32;
>> >>  #define SETUP_SECT_MIN 5
>> >>  #define SETUP_SECT_MAX 64
>> >>
>> >> +#define PARAGRAPH_SIZE 16
>> >> +#define SECTOR_SIZE 512
>> >> +#define FILE_ALIGNMENT 512
>> >> +#define SECTION_ALIGNMENT 4096
>> >> +
>> >>  /* This must be large enough to hold the entire setup */
>> >> -u8 buf[SETUP_SECT_MAX*512];
>> >> +uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
>> >>
>> >>  #define PECOFF_RELOC_RESERVE 0x20
>> >>
>> >> @@ -59,6 +65,52 @@ u8 buf[SETUP_SECT_MAX*512];
>> >>  #define PECOFF_COMPAT_RESERVE 0x0
>> >>  #endif
>> >>
>> >> +#define RELOC_SECTION_SIZE 10
>> >> +
>> >> +/* PE header has different format depending on the architecture */
>> >> +#ifdef CONFIG_X86_64
>> >> +typedef struct pe32plus_opt_hdr pe_opt_hdr;
>> >> +#else
>> >> +typedef struct pe32_opt_hdr pe_opt_hdr;
>> >> +#endif
>> >> +
>> >> +static inline struct pe_hdr *get_pe_header(uint8_t *buf)
>> >> +{
>> >> +       uint32_t pe_offset =
>> >> get_unaligned_le32(buf+MZ_HEADER_PEADDR_OFFSET);
>> >> +       return (struct pe_hdr *)(buf + pe_offset);
>> >> +}
>> >> +
>> >> +static inline pe_opt_hdr *get_pe_opt_header(uint8_t *buf)
>> >> +{
>> >> +       return (pe_opt_hdr *)(get_pe_header(buf) + 1);
>> >> +}
>> >> +
>> >> +static inline struct section_header *get_sections(uint8_t *buf)
>> >> +{
>> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> >> +       uint32_t n_data_dirs = get_unaligned_le32(&hdr->data_dirs);
>> >> +       uint8_t *sections = (uint8_t *)(hdr + 1) +
>> >> n_data_dirs*sizeof(struct data_dirent);
>> >> +       return  (struct section_header *)sections;
>> >> +}
>> >> +
>> >> +static inline struct data_directory *get_data_dirs(uint8_t *buf)
>> >> +{
>> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> >> +       return (struct data_directory *)(hdr + 1);
>> >> +}
>> >> +
>> >> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
>> >
>> > Can we drop this conditional?
>> 
>> Without CONFIG_EFI_DXE_MEM_ATTRIBUTES memory attributes are not
>> getting applies anywhere, so this would break 'nokaslr' on UEFI
>> implementations that honor section attributes.
>> 
> 
> How so? This only affects the mappings that are created by UEFI for
> the decompressor binary, right?

I was thinking about the in-place decompression, but now I've realized
that I was wrong since in-place decompression cannot happen when booting
via the stub. I'll remove the ifdef.

> 
>> KASLR is already broken without that option on implementations
>> that disallow execution of the free memory though. But unlike
>> free memory, sections are more likely to get protected, I think.
>> 
> 
> We need to allocate those pages properly in any case (see my other
> reply) so it is no longer free memory.

It should be fine, as I explained.

The only thing that is a little unexpected is that the kernel might
shift even with 'nokaslr' when the LOAD_PHYSICAL_ADDR is already taken
by some firmware allocation (or by us). This should cause no real
problems, since the kernel is required to be relocatable for the 
EFISTUB.

> 
>> >> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE |
>> >> IMAGE_SCN_ALIGN_4096BYTES)
>> >> +#define SCN_RX (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE |
>> >> IMAGE_SCN_ALIGN_4096BYTES)
>> >> +#define SCN_RO (IMAGE_SCN_MEM_READ | IMAGE_SCN_ALIGN_4096BYTES)
>> >
>> > Please drop the alignment flags - they don't apply to executable only
>> > object files.
>> 
>> Got it, will remove them in v5.
>> 
>> >
>> >> +#else
>> >> +/* With memory protection disabled all sections are RWX */
>> >> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | \
>> >> +               IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
>> >> +#define SCN_RX SCN_RW
>> >> +#define SCN_RO SCN_RW
>> >> +#endif
>> >> +
>> >>  static unsigned long efi32_stub_entry;
>> >>  static unsigned long efi64_stub_entry;
>> >>  static unsigned long efi_pe_entry;
>> >> @@ -70,7 +122,7 @@ static unsigned long _end;
>> >>
>> >>
>> >> /*----------------------------------------------------------------------*/
>> >>
>> >> -static const u32 crctab32[] = {
>> >> +static const uint32_t crctab32[] = {
>> >
>> > Replacing all the type names makes this patch very messy. Can we back
>> > that out please?
>> 
>> Ok, I will revert them.
>> 
>> >
>> >>         0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
>> >>         0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
>> >>         0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
>> >> @@ -125,12 +177,12 @@ static const u32 crctab32[] = {
>> >>         0x2d02ef8d
>> >>  };
>> >>
>> >> -static u32 partial_crc32_one(u8 c, u32 crc)
>> >> +static uint32_t partial_crc32_one(uint8_t c, uint32_t crc)
>> >>  {
>> >>         return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
>> >>  }
>> >>
>> >> -static u32 partial_crc32(const u8 *s, int len, u32 crc)
>> >> +static uint32_t partial_crc32(const uint8_t *s, int len, uint32_t
>> >> crc)
>> >>  {
>> >>         while (len--)
>> >>                 crc = partial_crc32_one(*s++, crc);
>> >> @@ -152,57 +204,106 @@ static void usage(void)
>> >>         die("Usage: build setup system zoffset.h image");
>> >>  }
>> >>
>> >> +static void *map_file(const char *path, size_t *psize)
>> >> +{
>> >> +       struct stat statbuf;
>> >> +       size_t size;
>> >> +       void *addr;
>> >> +       int fd;
>> >> +
>> >> +       fd = open(path, O_RDONLY);
>> >> +       if (fd < 0)
>> >> +               die("Unable to open `%s': %m", path);
>> >> +       if (fstat(fd, &statbuf))
>> >> +               die("Unable to stat `%s': %m", path);
>> >> +
>> >> +       size = statbuf.st_size;
>> >> +       /*
>> >> +        * Map one byte more, to allow adding null-terminator
>> >> +        * for text files.
>> >> +        */
>> >> +       addr = mmap(NULL, size + 1, PROT_READ | PROT_WRITE,
>> >> MAP_PRIVATE, fd, 0);
>> >> +       if (addr == MAP_FAILED)
>> >> +               die("Unable to mmap '%s': %m", path);
>> >> +
>> >> +       close(fd);
>> >> +
>> >> +       *psize = size;
>> >> +       return addr;
>> >> +}
>> >> +
>> >> +static void unmap_file(void *addr, size_t size)
>> >> +{
>> >> +       munmap(addr, size + 1);
>> >> +}
>> >> +
>> >> +static void *map_output_file(const char *path, size_t size)
>> >> +{
>> >> +       void *addr;
>> >> +       int fd;
>> >> +
>> >> +       fd = open(path, O_RDWR | O_CREAT, 0660);
>> >> +       if (fd < 0)
>> >> +               die("Unable to create `%s': %m", path);
>> >> +
>> >> +       if (ftruncate(fd, size))
>> >> +               die("Unable to resize `%s': %m", path);
>> >> +
>> >> +       addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
>> >> fd, 0);
>> >> +       if (addr == MAP_FAILED)
>> >> +               die("Unable to mmap '%s': %m", path);
>> >> +
>> >> +       return addr;
>> >> +}
>> >> +
>> >>  #ifdef CONFIG_EFI_STUB
>> >>
>> >> -static void update_pecoff_section_header_fields(char *section_name,
>> >> u32 vma, u32 size, u32 datasz, u32 offset)
>> >> +static void update_pecoff_section_header_fields(char *section_name,
>> >> uint32_t vma,
>> >> +                                               uint32_t size,
>> >> uint32_t datasz,
>> >> +                                               uint32_t offset)
>> >>  {
>> >>         unsigned int pe_header;
>> >>         unsigned short num_sections;
>> >> -       u8 *section;
>> >> +       struct section_header *section;
>> >>
>> >> -       pe_header = get_unaligned_le32(&buf[0x3c]);
>> >> -       num_sections = get_unaligned_le16(&buf[pe_header + 6]);
>> >> -
>> >> -#ifdef CONFIG_X86_32
>> >> -       section = &buf[pe_header + 0xa8];
>> >> -#else
>> >> -       section = &buf[pe_header + 0xb8];
>> >> -#endif
>> >> +       struct pe_hdr *hdr = get_pe_header(buf);
>> >> +       num_sections = get_unaligned_le16(&hdr->sections);
>> >> +       section = get_sections(buf);
>> >>
>> >>         while (num_sections > 0) {
>> >> -               if (strncmp((char*)section, section_name, 8) == 0) {
>> >> +               if (strncmp(section->name, section_name, 8) == 0) {
>> >>                         /* section header size field */
>> >> -                       put_unaligned_le32(size, section + 0x8);
>> >> +                       put_unaligned_le32(size,
>> >> &section->virtual_size);
>> >>
>> >>                         /* section header vma field */
>> >> -                       put_unaligned_le32(vma, section + 0xc);
>> >> +                       put_unaligned_le32(vma,
>> >> &section->virtual_address);
>> >>
>> >>                         /* section header 'size of initialised data'
>> >> field */
>> >> -                       put_unaligned_le32(datasz, section + 0x10);
>> >> +                       put_unaligned_le32(datasz,
>> >> &section->raw_data_size);
>> >>
>> >>                         /* section header 'file offset' field */
>> >> -                       put_unaligned_le32(offset, section + 0x14);
>> >> +                       put_unaligned_le32(offset,
>> >> &section->data_addr);
>> >>
>> >>                         break;
>> >>                 }
>> >> -               section += 0x28;
>> >> +               section++;
>> >>                 num_sections--;
>> >>         }
>> >>  }
>> >>
>> >> -static void update_pecoff_section_header(char *section_name, u32
>> >> offset, u32 size)
>> >> +static void update_pecoff_section_header(char *section_name, uint32_t
>> >> offset, uint32_t size)
>> >>  {
>> >>         update_pecoff_section_header_fields(section_name, offset,
>> >> size, size, offset);
>> >>  }
>> >>
>> >>  static void update_pecoff_setup_and_reloc(unsigned int size)
>> >>  {
>> >> -       u32 setup_offset = 0x200;
>> >> -       u32 reloc_offset = size - PECOFF_RELOC_RESERVE -
>> >> PECOFF_COMPAT_RESERVE;
>> >> +       uint32_t setup_offset = SECTOR_SIZE;
>> >> +       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE -
>> >> PECOFF_COMPAT_RESERVE;
>> >>  #ifdef CONFIG_EFI_MIXED
>> >> -       u32 compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
>> >> +       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
>> >>  #endif
>> >> -       u32 setup_size = reloc_offset - setup_offset;
>> >> +       uint32_t setup_size = reloc_offset - setup_offset;
>> >>
>> >>         update_pecoff_section_header(".setup", setup_offset,
>> >> setup_size);
>> >>         update_pecoff_section_header(".reloc", reloc_offset,
>> >> PECOFF_RELOC_RESERVE);
>> >> @@ -211,8 +312,8 @@ static void update_pecoff_setup_and_reloc(unsigned
>> >> int size)
>> >>          * Modify .reloc section contents with a single entry. The
>> >>          * relocation is applied to offset 10 of the relocation
>> >> section.
>> >>          */
>> >> -       put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
>> >> -       put_unaligned_le32(10, &buf[reloc_offset + 4]);
>> >> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE,
>> >> &buf[reloc_offset]);
>> >> +       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset +
>> >> 4]);
>> >>
>> >>  #ifdef CONFIG_EFI_MIXED
>> >>         update_pecoff_section_header(".compat", compat_offset,
>> >> PECOFF_COMPAT_RESERVE);
>> >> @@ -224,19 +325,17 @@ static void
>> >> update_pecoff_setup_and_reloc(unsigned int size)
>> >>          */
>> >>         buf[compat_offset] = 0x1;
>> >>         buf[compat_offset + 1] = 0x8;
>> >> -       put_unaligned_le16(0x14c, &buf[compat_offset + 2]);
>> >> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset
>> >> + 2]);
>> >>         put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset +
>> >> 4]);
>> >>  #endif
>> >>  }
>> >>
>> >> -static void update_pecoff_text(unsigned int text_start, unsigned int
>> >> file_sz,
>> >> +static unsigned int update_pecoff_sections(unsigned int text_start,
>> >> unsigned int text_sz,
>> >>                                unsigned int init_sz)
>> >>  {
>> >> -       unsigned int pe_header;
>> >> -       unsigned int text_sz = file_sz - text_start;
>> >> +       unsigned int file_sz = text_start + text_sz;
>> >>         unsigned int bss_sz = init_sz - file_sz;
>> >> -
>> >> -       pe_header = get_unaligned_le32(&buf[0x3c]);
>> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> >>
>> >>         /*
>> >>          * The PE/COFF loader may load the image at an address which
>> >> is
>> >> @@ -254,18 +353,20 @@ static void update_pecoff_text(unsigned int
>> >> text_start, unsigned int file_sz,
>> >>          * Size of code: Subtract the size of the first sector (512
>> >> bytes)
>> >>          * which includes the header.
>> >>          */
>> >> -       put_unaligned_le32(file_sz - 512 + bss_sz, &buf[pe_header +
>> >> 0x1c]);
>> >> +       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz,
>> >> &hdr->text_size);
>> >>
>> >>         /* Size of image */
>> >> -       put_unaligned_le32(init_sz, &buf[pe_header + 0x50]);
>> >> +       put_unaligned_le32(init_sz, &hdr->image_size);
>> >>
>> >>         /*
>> >>          * Address of entry point for PE/COFF executable
>> >>          */
>> >> -       put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header +
>> >> 0x28]);
>> >> +       put_unaligned_le32(text_start + efi_pe_entry,
>> >> &hdr->entry_point);
>> >>
>> >>         update_pecoff_section_header_fields(".text", text_start,
>> >> text_sz + bss_sz,
>> >>                                             text_sz, text_start);
>> >> +
>> >> +       return text_start + file_sz;
>> >>  }
>> >>
>> >>  static int reserve_pecoff_reloc_section(int c)
>> >> @@ -275,7 +376,7 @@ static int reserve_pecoff_reloc_section(int c)
>> >>         return PECOFF_RELOC_RESERVE;
>> >>  }
>> >>
>> >> -static void efi_stub_defaults(void)
>> >> +static void efi_stub_update_defaults(void)
>> >>  {
>> >>         /* Defaults for old kernel */
>> >>  #ifdef CONFIG_X86_32
>> >> @@ -298,7 +399,7 @@ static void efi_stub_entry_update(void)
>> >>
>> >>  #ifdef CONFIG_EFI_MIXED
>> >>         if (efi32_stub_entry != addr)
>> >> -               die("32-bit and 64-bit EFI entry points do not
>> >> match\n");
>> >> +               die("32-bit and 64-bit EFI entry points do not
>> >> match");
>> >>  #endif
>> >>  #endif
>> >>         put_unaligned_le32(addr, &buf[0x264]);
>> >> @@ -310,7 +411,7 @@ static inline void
>> >> update_pecoff_setup_and_reloc(unsigned int size) {}
>> >>  static inline void update_pecoff_text(unsigned int text_start,
>> >>                                       unsigned int file_sz,
>> >>                                       unsigned int init_sz) {}
>> >> -static inline void efi_stub_defaults(void) {}
>> >> +static inline void efi_stub_update_defaults(void) {}
>> >>  static inline void efi_stub_entry_update(void) {}
>> >>
>> >>  static inline int reserve_pecoff_reloc_section(int c)
>> >> @@ -338,20 +439,15 @@ static int reserve_pecoff_compat_section(int c)
>> >>
>> >>  static void parse_zoffset(char *fname)
>> >>  {
>> >> -       FILE *file;
>> >> -       char *p;
>> >> -       int c;
>> >> +       size_t size;
>> >> +       char *data, *p;
>> >>
>> >> -       file = fopen(fname, "r");
>> >> -       if (!file)
>> >> -               die("Unable to open `%s': %m", fname);
>> >> -       c = fread(buf, 1, sizeof(buf) - 1, file);
>> >> -       if (ferror(file))
>> >> -               die("read-error on `zoffset.h'");
>> >> -       fclose(file);
>> >> -       buf[c] = 0;
>> >> +       data = map_file(fname, &size);
>> >>
>> >> -       p = (char *)buf;
>> >> +       /* We can do that, since we mapped one byte more */
>> >> +       data[size] = 0;
>> >> +
>> >> +       p = (char *)data;
>> >>
>> >>         while (p && *p) {
>> >>                 PARSE_ZOFS(p, efi32_stub_entry);
>> >> @@ -367,82 +463,99 @@ static void parse_zoffset(char *fname)
>> >>                 while (p && (*p == '\r' || *p == '\n'))
>> >>                         p++;
>> >>         }
>> >> +
>> >> +       unmap_file(data, size);
>> >>  }
>> >>
>> >> -int main(int argc, char ** argv)
>> >> +static unsigned int read_setup(char *path)
>> >>  {
>> >> -       unsigned int i, sz, setup_sectors, init_sz;
>> >> -       int c;
>> >> -       u32 sys_size;
>> >> -       struct stat sb;
>> >> -       FILE *file, *dest;
>> >> -       int fd;
>> >> -       void *kernel;
>> >> -       u32 crc = 0xffffffffUL;
>> >> -
>> >> -       efi_stub_defaults();
>> >> -
>> >> -       if (argc != 5)
>> >> -               usage();
>> >> -       parse_zoffset(argv[3]);
>> >> -
>> >> -       dest = fopen(argv[4], "w");
>> >> -       if (!dest)
>> >> -               die("Unable to write `%s': %m", argv[4]);
>> >> +       FILE *file;
>> >> +       unsigned int setup_size, file_size;
>> >>
>> >>         /* Copy the setup code */
>> >> -       file = fopen(argv[1], "r");
>> >> +       file = fopen(path, "r");
>> >>         if (!file)
>> >> -               die("Unable to open `%s': %m", argv[1]);
>> >> -       c = fread(buf, 1, sizeof(buf), file);
>> >> +               die("Unable to open `%s': %m", path);
>> >> +
>> >> +       file_size = fread(buf, 1, sizeof(buf), file);
>> >>         if (ferror(file))
>> >>                 die("read-error on `setup'");
>> >> -       if (c < 1024)
>> >> +
>> >> +       if (file_size < 2 * SECTOR_SIZE)
>> >>                 die("The setup must be at least 1024 bytes");
>> >> -       if (get_unaligned_le16(&buf[510]) != 0xAA55)
>> >> +
>> >> +       if (get_unaligned_le16(&buf[SECTOR_SIZE - 2]) != 0xAA55)
>> >>                 die("Boot block hasn't got boot flag (0xAA55)");
>> >> +
>> >>         fclose(file);
>> >>
>> >> -       c += reserve_pecoff_compat_section(c);
>> >> -       c += reserve_pecoff_reloc_section(c);
>> >> +       /* Reserve space for PE sections */
>> >> +       file_size += reserve_pecoff_compat_section(file_size);
>> >> +       file_size += reserve_pecoff_reloc_section(file_size);
>> >>
>> >>         /* Pad unused space with zeros */
>> >> -       setup_sectors = (c + 511) / 512;
>> >> -       if (setup_sectors < SETUP_SECT_MIN)
>> >> -               setup_sectors = SETUP_SECT_MIN;
>> >> -       i = setup_sectors*512;
>> >> -       memset(buf+c, 0, i-c);
>> >>
>> >> -       update_pecoff_setup_and_reloc(i);
>> >> +       setup_size = round_up(file_size, SECTOR_SIZE);
>> >> +
>> >> +       if (setup_size < SETUP_SECT_MIN * SECTOR_SIZE)
>> >> +               setup_size = SETUP_SECT_MIN * SECTOR_SIZE;
>> >> +
>> >> +       /*
>> >> +        * Global buffer is already initialised
>> >> +        * to 0, but just in case, zero out padding.
>> >> +        */
>> >> +
>> >> +       memset(buf + file_size, 0, setup_size - file_size);
>> >> +
>> >> +       return setup_size;
>> >> +}
>> >> +
>> >> +int main(int argc, char **argv)
>> >> +{
>> >> +       size_t kern_file_size;
>> >> +       unsigned int setup_size;
>> >> +       unsigned int setup_sectors;
>> >> +       unsigned int init_size;
>> >> +       unsigned int total_size;
>> >> +       unsigned int kern_size;
>> >> +       void *kernel;
>> >> +       uint32_t crc = 0xffffffffUL;
>> >> +       uint8_t *output;
>> >> +
>> >> +       if (argc != 5)
>> >> +               usage();
>> >> +
>> >> +       efi_stub_update_defaults();
>> >> +       parse_zoffset(argv[3]);
>> >> +
>> >> +       setup_size = read_setup(argv[1]);
>> >> +
>> >> +       setup_sectors = setup_size/SECTOR_SIZE;
>> >>
>> >>         /* Set the default root device */
>> >>         put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
>> >>
>> >> -       /* Open and stat the kernel file */
>> >> -       fd = open(argv[2], O_RDONLY);
>> >> -       if (fd < 0)
>> >> -               die("Unable to open `%s': %m", argv[2]);
>> >> -       if (fstat(fd, &sb))
>> >> -               die("Unable to stat `%s': %m", argv[2]);
>> >> -       sz = sb.st_size;
>> >> -       kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
>> >> -       if (kernel == MAP_FAILED)
>> >> -               die("Unable to mmap '%s': %m", argv[2]);
>> >> -       /* Number of 16-byte paragraphs, including space for a 4-byte
>> >> CRC */
>> >> -       sys_size = (sz + 15 + 4) / 16;
>> >> +       /* Map kernel file to memory */
>> >> +       kernel = map_file(argv[2], &kern_file_size);
>> >> +
>> >>  #ifdef CONFIG_EFI_STUB
>> >> -       /*
>> >> -        * COFF requires minimum 32-byte alignment of sections, and
>> >> -        * adding a signature is problematic without that alignment.
>> >> -        */
>> >> -       sys_size = (sys_size + 1) & ~1;
>> >> +       /* PE specification require 512-byte minimum section file
>> >> alignment */
>> >> +       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
>> >> +       update_pecoff_setup_and_reloc(setup_size);
>> >> +#else
>> >> +       /* Number of 16-byte paragraphs, including space for a 4-byte
>> >> CRC */
>> >> +       kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
>> >>  #endif
>> >>
>> >>         /* Patch the setup code with the appropriate size parameters
>> >> */
>> >> -       buf[0x1f1] = setup_sectors-1;
>> >> -       put_unaligned_le32(sys_size, &buf[0x1f4]);
>> >> +       buf[0x1f1] = setup_sectors - 1;
>> >> +       put_unaligned_le32(kern_size/PARAGRAPH_SIZE, &buf[0x1f4]);
>> >> +
>> >> +       /* Update kernel_info offset. */
>> >> +       put_unaligned_le32(kernel_info, &buf[0x268]);
>> >> +
>> >> +       init_size = get_unaligned_le32(&buf[0x260]);
>> >>
>> >> -       init_sz = get_unaligned_le32(&buf[0x260]);
>> >>  #ifdef CONFIG_EFI_STUB
>> >>         /*
>> >>          * The decompression buffer will start at ImageBase. When
>> >> relocating
>> >> @@ -458,45 +571,35 @@ int main(int argc, char ** argv)
>> >>          * For future-proofing, increase init_sz if necessary.
>> >>          */
>> >>
>> >> -       if (init_sz - _end < i + _ehead) {
>> >> -               init_sz = (i + _ehead + _end + 4095) & ~4095;
>> >> -               put_unaligned_le32(init_sz, &buf[0x260]);
>> >> +       if (init_size - _end < setup_size + _ehead) {
>> >> +               init_size = round_up(setup_size + _ehead + _end,
>> >> SECTION_ALIGNMENT);
>> >> +               put_unaligned_le32(init_size, &buf[0x260]);
>> >>         }
>> >> -#endif
>> >> -       update_pecoff_text(setup_sectors * 512, i + (sys_size * 16),
>> >> init_sz);
>> >>
>> >> -       efi_stub_entry_update();
>> >> -
>> >> -       /* Update kernel_info offset. */
>> >> -       put_unaligned_le32(kernel_info, &buf[0x268]);
>> >> +       total_size = update_pecoff_sections(setup_size, kern_size,
>> >> init_size);
>> >>
>> >> -       crc = partial_crc32(buf, i, crc);
>> >> -       if (fwrite(buf, 1, i, dest) != i)
>> >> -               die("Writing setup failed");
>> >> +       efi_stub_entry_update();
>> >> +#else
>> >> +       (void)init_size;
>> >> +       total_size = setup_size + kern_size;
>> >> +#endif
>> >>
>> >> -       /* Copy the kernel code */
>> >> -       crc = partial_crc32(kernel, sz, crc);
>> >> -       if (fwrite(kernel, 1, sz, dest) != sz)
>> >> -               die("Writing kernel failed");
>> >> +       output = map_output_file(argv[4], total_size);
>> >>
>> >> -       /* Add padding leaving 4 bytes for the checksum */
>> >> -       while (sz++ < (sys_size*16) - 4) {
>> >> -               crc = partial_crc32_one('\0', crc);
>> >> -               if (fwrite("\0", 1, 1, dest) != 1)
>> >> -                       die("Writing padding failed");
>> >> -       }
>> >> +       memcpy(output, buf, setup_size);
>> >> +       memcpy(output + setup_size, kernel, kern_file_size);
>> >> +       memset(output + setup_size + kern_file_size, 0, kern_size -
>> >> kern_file_size);
>> >>
>> >> -       /* Write the CRC */
>> >> -       put_unaligned_le32(crc, buf);
>> >> -       if (fwrite(buf, 1, 4, dest) != 4)
>> >> -               die("Writing CRC failed");
>> >> +       /* Calculate and write kernel checksum. */
>> >> +       crc = partial_crc32(output, total_size - 4, crc);
>> >> +       put_unaligned_le32(crc, &output[total_size - 4]);
>> >>
>> >> -       /* Catch any delayed write failures */
>> >> -       if (fclose(dest))
>> >> -               die("Writing image failed");
>> >> +       /* Catch any delayed write failures. */
>> >> +       if (munmap(output, total_size) < 0)
>> >> +               die("Writing kernel failed");
>> >>
>> >> -       close(fd);
>> >> +       unmap_file(kernel, kern_file_size);
>> >>
>> >> -       /* Everything is OK */
>> >> +       /* Everything is OK. */
>> >>         return 0;
>> >>  }
>> >> --
>> >> 2.37.4
>> >>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 19/26] x86/build: Cleanup tools/build.c
  2023-03-09 17:22         ` Evgeniy Baskov
@ 2023-03-09 17:37           ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-09 17:37 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 9 Mar 2023 at 18:22, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-09 19:50, Ard Biesheuvel wrote:
> > On Thu, 9 Mar 2023 at 17:25, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> On 2023-03-09 18:57, Ard Biesheuvel wrote:
> >> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >> >>
> >> >> Use newer C standard. Since kernel requires C99 compiler now,
> >> >> we can make use of the new features to make the core more readable.
> >> >>
> >> >> Use mmap() for reading files also to make things simpler.
> >> >>
> >> >> Replace most magic numbers with defines.
> >> >>
> >> >> Should have no functional changes. This is done in preparation for the
> >> >> next changes that makes generated PE header more spec compliant.
> >> >>
> >> >> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> >> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >> >> ---
> >> >>  arch/x86/boot/tools/build.c | 387
> >> >> +++++++++++++++++++++++-------------
> >> >>  1 file changed, 245 insertions(+), 142 deletions(-)
> >> >>
> >> >> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
> >> >> index bd247692b701..fbc5315af032 100644
> >> >> --- a/arch/x86/boot/tools/build.c
> >> >> +++ b/arch/x86/boot/tools/build.c
> >> >> @@ -25,20 +25,21 @@
> >> >>   * Substantially overhauled by H. Peter Anvin, April 2007
> >> >>   */
> >> >>
> >> >> +#include <fcntl.h>
> >> >> +#include <stdarg.h>
> >> >> +#include <stdint.h>
> >> >>  #include <stdio.h>
> >> >> -#include <string.h>
> >> >>  #include <stdlib.h>
> >> >> -#include <stdarg.h>
> >> >> -#include <sys/types.h>
> >> >> +#include <string.h>
> >> >> +#include <sys/mman.h>
> >> >>  #include <sys/stat.h>
> >> >> +#include <sys/types.h>
> >> >>  #include <unistd.h>
> >> >> -#include <fcntl.h>
> >> >> -#include <sys/mman.h>
> >> >> +
> >> >>  #include <tools/le_byteshift.h>
> >> >> +#include <linux/pe.h>
> >> >>
> >> >> -typedef unsigned char  u8;
> >> >> -typedef unsigned short u16;
> >> >> -typedef unsigned int   u32;
> >> >> +#define round_up(x, n) (((x) + (n) - 1) & ~((n) - 1))
> >> >>
> >> >>  #define DEFAULT_MAJOR_ROOT 0
> >> >>  #define DEFAULT_MINOR_ROOT 0
> >> >> @@ -48,8 +49,13 @@ typedef unsigned int   u32;
> >> >>  #define SETUP_SECT_MIN 5
> >> >>  #define SETUP_SECT_MAX 64
> >> >>
> >> >> +#define PARAGRAPH_SIZE 16
> >> >> +#define SECTOR_SIZE 512
> >> >> +#define FILE_ALIGNMENT 512
> >> >> +#define SECTION_ALIGNMENT 4096
> >> >> +
> >> >>  /* This must be large enough to hold the entire setup */
> >> >> -u8 buf[SETUP_SECT_MAX*512];
> >> >> +uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
> >> >>
> >> >>  #define PECOFF_RELOC_RESERVE 0x20
> >> >>
> >> >> @@ -59,6 +65,52 @@ u8 buf[SETUP_SECT_MAX*512];
> >> >>  #define PECOFF_COMPAT_RESERVE 0x0
> >> >>  #endif
> >> >>
> >> >> +#define RELOC_SECTION_SIZE 10
> >> >> +
> >> >> +/* PE header has different format depending on the architecture */
> >> >> +#ifdef CONFIG_X86_64
> >> >> +typedef struct pe32plus_opt_hdr pe_opt_hdr;
> >> >> +#else
> >> >> +typedef struct pe32_opt_hdr pe_opt_hdr;
> >> >> +#endif
> >> >> +
> >> >> +static inline struct pe_hdr *get_pe_header(uint8_t *buf)
> >> >> +{
> >> >> +       uint32_t pe_offset =
> >> >> get_unaligned_le32(buf+MZ_HEADER_PEADDR_OFFSET);
> >> >> +       return (struct pe_hdr *)(buf + pe_offset);
> >> >> +}
> >> >> +
> >> >> +static inline pe_opt_hdr *get_pe_opt_header(uint8_t *buf)
> >> >> +{
> >> >> +       return (pe_opt_hdr *)(get_pe_header(buf) + 1);
> >> >> +}
> >> >> +
> >> >> +static inline struct section_header *get_sections(uint8_t *buf)
> >> >> +{
> >> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> >> >> +       uint32_t n_data_dirs = get_unaligned_le32(&hdr->data_dirs);
> >> >> +       uint8_t *sections = (uint8_t *)(hdr + 1) +
> >> >> n_data_dirs*sizeof(struct data_dirent);
> >> >> +       return  (struct section_header *)sections;
> >> >> +}
> >> >> +
> >> >> +static inline struct data_directory *get_data_dirs(uint8_t *buf)
> >> >> +{
> >> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> >> >> +       return (struct data_directory *)(hdr + 1);
> >> >> +}
> >> >> +
> >> >> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
> >> >
> >> > Can we drop this conditional?
> >>
> >> Without CONFIG_EFI_DXE_MEM_ATTRIBUTES memory attributes are not
> >> getting applies anywhere, so this would break 'nokaslr' on UEFI
> >> implementations that honor section attributes.
> >>
> >
> > How so? This only affects the mappings that are created by UEFI for
> > the decompressor binary, right?
>
> I was thinking about the in-place decompression, but now I've realized
> that I was wrong since in-place decompression cannot happen when booting
> via the stub. I'll remove the ifdef.
>

Indeed. And I realized that all the image_offset handling can now be
dropped as well.

> >
> >> KASLR is already broken without that option on implementations
> >> that disallow execution of the free memory though. But unlike
> >> free memory, sections are more likely to get protected, I think.
> >>
> >
> > We need to allocate those pages properly in any case (see my other
> > reply) so it is no longer free memory.
>
> It should be fine, as I explained.
>
> The only thing that is a little unexpected is that the kernel might
> shift even with 'nokaslr' when the LOAD_PHYSICAL_ADDR is already taken
> by some firmware allocation (or by us). This should cause no real
> problems, since the kernel is required to be relocatable for the
> EFISTUB.
>

OK, good to know.

> >
> >> >> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE |
> >> >> IMAGE_SCN_ALIGN_4096BYTES)
> >> >> +#define SCN_RX (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE |
> >> >> IMAGE_SCN_ALIGN_4096BYTES)
> >> >> +#define SCN_RO (IMAGE_SCN_MEM_READ | IMAGE_SCN_ALIGN_4096BYTES)
> >> >
> >> > Please drop the alignment flags - they don't apply to executable only
> >> > object files.
> >>
> >> Got it, will remove them in v5.
> >>
> >> >
> >> >> +#else
> >> >> +/* With memory protection disabled all sections are RWX */
> >> >> +#define SCN_RW (IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE | \
> >> >> +               IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_ALIGN_4096BYTES)
> >> >> +#define SCN_RX SCN_RW
> >> >> +#define SCN_RO SCN_RW
> >> >> +#endif
> >> >> +
> >> >>  static unsigned long efi32_stub_entry;
> >> >>  static unsigned long efi64_stub_entry;
> >> >>  static unsigned long efi_pe_entry;
> >> >> @@ -70,7 +122,7 @@ static unsigned long _end;
> >> >>
> >> >>
> >> >> /*----------------------------------------------------------------------*/
> >> >>
> >> >> -static const u32 crctab32[] = {
> >> >> +static const uint32_t crctab32[] = {
> >> >
> >> > Replacing all the type names makes this patch very messy. Can we back
> >> > that out please?
> >>
> >> Ok, I will revert them.
> >>
> >> >
> >> >>         0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419,
> >> >>         0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4,
> >> >>         0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07,
> >> >> @@ -125,12 +177,12 @@ static const u32 crctab32[] = {
> >> >>         0x2d02ef8d
> >> >>  };
> >> >>
> >> >> -static u32 partial_crc32_one(u8 c, u32 crc)
> >> >> +static uint32_t partial_crc32_one(uint8_t c, uint32_t crc)
> >> >>  {
> >> >>         return crctab32[(crc ^ c) & 0xff] ^ (crc >> 8);
> >> >>  }
> >> >>
> >> >> -static u32 partial_crc32(const u8 *s, int len, u32 crc)
> >> >> +static uint32_t partial_crc32(const uint8_t *s, int len, uint32_t
> >> >> crc)
> >> >>  {
> >> >>         while (len--)
> >> >>                 crc = partial_crc32_one(*s++, crc);
> >> >> @@ -152,57 +204,106 @@ static void usage(void)
> >> >>         die("Usage: build setup system zoffset.h image");
> >> >>  }
> >> >>
> >> >> +static void *map_file(const char *path, size_t *psize)
> >> >> +{
> >> >> +       struct stat statbuf;
> >> >> +       size_t size;
> >> >> +       void *addr;
> >> >> +       int fd;
> >> >> +
> >> >> +       fd = open(path, O_RDONLY);
> >> >> +       if (fd < 0)
> >> >> +               die("Unable to open `%s': %m", path);
> >> >> +       if (fstat(fd, &statbuf))
> >> >> +               die("Unable to stat `%s': %m", path);
> >> >> +
> >> >> +       size = statbuf.st_size;
> >> >> +       /*
> >> >> +        * Map one byte more, to allow adding null-terminator
> >> >> +        * for text files.
> >> >> +        */
> >> >> +       addr = mmap(NULL, size + 1, PROT_READ | PROT_WRITE,
> >> >> MAP_PRIVATE, fd, 0);
> >> >> +       if (addr == MAP_FAILED)
> >> >> +               die("Unable to mmap '%s': %m", path);
> >> >> +
> >> >> +       close(fd);
> >> >> +
> >> >> +       *psize = size;
> >> >> +       return addr;
> >> >> +}
> >> >> +
> >> >> +static void unmap_file(void *addr, size_t size)
> >> >> +{
> >> >> +       munmap(addr, size + 1);
> >> >> +}
> >> >> +
> >> >> +static void *map_output_file(const char *path, size_t size)
> >> >> +{
> >> >> +       void *addr;
> >> >> +       int fd;
> >> >> +
> >> >> +       fd = open(path, O_RDWR | O_CREAT, 0660);
> >> >> +       if (fd < 0)
> >> >> +               die("Unable to create `%s': %m", path);
> >> >> +
> >> >> +       if (ftruncate(fd, size))
> >> >> +               die("Unable to resize `%s': %m", path);
> >> >> +
> >> >> +       addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
> >> >> fd, 0);
> >> >> +       if (addr == MAP_FAILED)
> >> >> +               die("Unable to mmap '%s': %m", path);
> >> >> +
> >> >> +       return addr;
> >> >> +}
> >> >> +
> >> >>  #ifdef CONFIG_EFI_STUB
> >> >>
> >> >> -static void update_pecoff_section_header_fields(char *section_name,
> >> >> u32 vma, u32 size, u32 datasz, u32 offset)
> >> >> +static void update_pecoff_section_header_fields(char *section_name,
> >> >> uint32_t vma,
> >> >> +                                               uint32_t size,
> >> >> uint32_t datasz,
> >> >> +                                               uint32_t offset)
> >> >>  {
> >> >>         unsigned int pe_header;
> >> >>         unsigned short num_sections;
> >> >> -       u8 *section;
> >> >> +       struct section_header *section;
> >> >>
> >> >> -       pe_header = get_unaligned_le32(&buf[0x3c]);
> >> >> -       num_sections = get_unaligned_le16(&buf[pe_header + 6]);
> >> >> -
> >> >> -#ifdef CONFIG_X86_32
> >> >> -       section = &buf[pe_header + 0xa8];
> >> >> -#else
> >> >> -       section = &buf[pe_header + 0xb8];
> >> >> -#endif
> >> >> +       struct pe_hdr *hdr = get_pe_header(buf);
> >> >> +       num_sections = get_unaligned_le16(&hdr->sections);
> >> >> +       section = get_sections(buf);
> >> >>
> >> >>         while (num_sections > 0) {
> >> >> -               if (strncmp((char*)section, section_name, 8) == 0) {
> >> >> +               if (strncmp(section->name, section_name, 8) == 0) {
> >> >>                         /* section header size field */
> >> >> -                       put_unaligned_le32(size, section + 0x8);
> >> >> +                       put_unaligned_le32(size,
> >> >> &section->virtual_size);
> >> >>
> >> >>                         /* section header vma field */
> >> >> -                       put_unaligned_le32(vma, section + 0xc);
> >> >> +                       put_unaligned_le32(vma,
> >> >> &section->virtual_address);
> >> >>
> >> >>                         /* section header 'size of initialised data'
> >> >> field */
> >> >> -                       put_unaligned_le32(datasz, section + 0x10);
> >> >> +                       put_unaligned_le32(datasz,
> >> >> &section->raw_data_size);
> >> >>
> >> >>                         /* section header 'file offset' field */
> >> >> -                       put_unaligned_le32(offset, section + 0x14);
> >> >> +                       put_unaligned_le32(offset,
> >> >> &section->data_addr);
> >> >>
> >> >>                         break;
> >> >>                 }
> >> >> -               section += 0x28;
> >> >> +               section++;
> >> >>                 num_sections--;
> >> >>         }
> >> >>  }
> >> >>
> >> >> -static void update_pecoff_section_header(char *section_name, u32
> >> >> offset, u32 size)
> >> >> +static void update_pecoff_section_header(char *section_name, uint32_t
> >> >> offset, uint32_t size)
> >> >>  {
> >> >>         update_pecoff_section_header_fields(section_name, offset,
> >> >> size, size, offset);
> >> >>  }
> >> >>
> >> >>  static void update_pecoff_setup_and_reloc(unsigned int size)
> >> >>  {
> >> >> -       u32 setup_offset = 0x200;
> >> >> -       u32 reloc_offset = size - PECOFF_RELOC_RESERVE -
> >> >> PECOFF_COMPAT_RESERVE;
> >> >> +       uint32_t setup_offset = SECTOR_SIZE;
> >> >> +       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE -
> >> >> PECOFF_COMPAT_RESERVE;
> >> >>  #ifdef CONFIG_EFI_MIXED
> >> >> -       u32 compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
> >> >> +       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
> >> >>  #endif
> >> >> -       u32 setup_size = reloc_offset - setup_offset;
> >> >> +       uint32_t setup_size = reloc_offset - setup_offset;
> >> >>
> >> >>         update_pecoff_section_header(".setup", setup_offset,
> >> >> setup_size);
> >> >>         update_pecoff_section_header(".reloc", reloc_offset,
> >> >> PECOFF_RELOC_RESERVE);
> >> >> @@ -211,8 +312,8 @@ static void update_pecoff_setup_and_reloc(unsigned
> >> >> int size)
> >> >>          * Modify .reloc section contents with a single entry. The
> >> >>          * relocation is applied to offset 10 of the relocation
> >> >> section.
> >> >>          */
> >> >> -       put_unaligned_le32(reloc_offset + 10, &buf[reloc_offset]);
> >> >> -       put_unaligned_le32(10, &buf[reloc_offset + 4]);
> >> >> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE,
> >> >> &buf[reloc_offset]);
> >> >> +       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset +
> >> >> 4]);
> >> >>
> >> >>  #ifdef CONFIG_EFI_MIXED
> >> >>         update_pecoff_section_header(".compat", compat_offset,
> >> >> PECOFF_COMPAT_RESERVE);
> >> >> @@ -224,19 +325,17 @@ static void
> >> >> update_pecoff_setup_and_reloc(unsigned int size)
> >> >>          */
> >> >>         buf[compat_offset] = 0x1;
> >> >>         buf[compat_offset + 1] = 0x8;
> >> >> -       put_unaligned_le16(0x14c, &buf[compat_offset + 2]);
> >> >> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset
> >> >> + 2]);
> >> >>         put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset +
> >> >> 4]);
> >> >>  #endif
> >> >>  }
> >> >>
> >> >> -static void update_pecoff_text(unsigned int text_start, unsigned int
> >> >> file_sz,
> >> >> +static unsigned int update_pecoff_sections(unsigned int text_start,
> >> >> unsigned int text_sz,
> >> >>                                unsigned int init_sz)
> >> >>  {
> >> >> -       unsigned int pe_header;
> >> >> -       unsigned int text_sz = file_sz - text_start;
> >> >> +       unsigned int file_sz = text_start + text_sz;
> >> >>         unsigned int bss_sz = init_sz - file_sz;
> >> >> -
> >> >> -       pe_header = get_unaligned_le32(&buf[0x3c]);
> >> >> +       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> >> >>
> >> >>         /*
> >> >>          * The PE/COFF loader may load the image at an address which
> >> >> is
> >> >> @@ -254,18 +353,20 @@ static void update_pecoff_text(unsigned int
> >> >> text_start, unsigned int file_sz,
> >> >>          * Size of code: Subtract the size of the first sector (512
> >> >> bytes)
> >> >>          * which includes the header.
> >> >>          */
> >> >> -       put_unaligned_le32(file_sz - 512 + bss_sz, &buf[pe_header +
> >> >> 0x1c]);
> >> >> +       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz,
> >> >> &hdr->text_size);
> >> >>
> >> >>         /* Size of image */
> >> >> -       put_unaligned_le32(init_sz, &buf[pe_header + 0x50]);
> >> >> +       put_unaligned_le32(init_sz, &hdr->image_size);
> >> >>
> >> >>         /*
> >> >>          * Address of entry point for PE/COFF executable
> >> >>          */
> >> >> -       put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header +
> >> >> 0x28]);
> >> >> +       put_unaligned_le32(text_start + efi_pe_entry,
> >> >> &hdr->entry_point);
> >> >>
> >> >>         update_pecoff_section_header_fields(".text", text_start,
> >> >> text_sz + bss_sz,
> >> >>                                             text_sz, text_start);
> >> >> +
> >> >> +       return text_start + file_sz;
> >> >>  }
> >> >>
> >> >>  static int reserve_pecoff_reloc_section(int c)
> >> >> @@ -275,7 +376,7 @@ static int reserve_pecoff_reloc_section(int c)
> >> >>         return PECOFF_RELOC_RESERVE;
> >> >>  }
> >> >>
> >> >> -static void efi_stub_defaults(void)
> >> >> +static void efi_stub_update_defaults(void)
> >> >>  {
> >> >>         /* Defaults for old kernel */
> >> >>  #ifdef CONFIG_X86_32
> >> >> @@ -298,7 +399,7 @@ static void efi_stub_entry_update(void)
> >> >>
> >> >>  #ifdef CONFIG_EFI_MIXED
> >> >>         if (efi32_stub_entry != addr)
> >> >> -               die("32-bit and 64-bit EFI entry points do not
> >> >> match\n");
> >> >> +               die("32-bit and 64-bit EFI entry points do not
> >> >> match");
> >> >>  #endif
> >> >>  #endif
> >> >>         put_unaligned_le32(addr, &buf[0x264]);
> >> >> @@ -310,7 +411,7 @@ static inline void
> >> >> update_pecoff_setup_and_reloc(unsigned int size) {}
> >> >>  static inline void update_pecoff_text(unsigned int text_start,
> >> >>                                       unsigned int file_sz,
> >> >>                                       unsigned int init_sz) {}
> >> >> -static inline void efi_stub_defaults(void) {}
> >> >> +static inline void efi_stub_update_defaults(void) {}
> >> >>  static inline void efi_stub_entry_update(void) {}
> >> >>
> >> >>  static inline int reserve_pecoff_reloc_section(int c)
> >> >> @@ -338,20 +439,15 @@ static int reserve_pecoff_compat_section(int c)
> >> >>
> >> >>  static void parse_zoffset(char *fname)
> >> >>  {
> >> >> -       FILE *file;
> >> >> -       char *p;
> >> >> -       int c;
> >> >> +       size_t size;
> >> >> +       char *data, *p;
> >> >>
> >> >> -       file = fopen(fname, "r");
> >> >> -       if (!file)
> >> >> -               die("Unable to open `%s': %m", fname);
> >> >> -       c = fread(buf, 1, sizeof(buf) - 1, file);
> >> >> -       if (ferror(file))
> >> >> -               die("read-error on `zoffset.h'");
> >> >> -       fclose(file);
> >> >> -       buf[c] = 0;
> >> >> +       data = map_file(fname, &size);
> >> >>
> >> >> -       p = (char *)buf;
> >> >> +       /* We can do that, since we mapped one byte more */
> >> >> +       data[size] = 0;
> >> >> +
> >> >> +       p = (char *)data;
> >> >>
> >> >>         while (p && *p) {
> >> >>                 PARSE_ZOFS(p, efi32_stub_entry);
> >> >> @@ -367,82 +463,99 @@ static void parse_zoffset(char *fname)
> >> >>                 while (p && (*p == '\r' || *p == '\n'))
> >> >>                         p++;
> >> >>         }
> >> >> +
> >> >> +       unmap_file(data, size);
> >> >>  }
> >> >>
> >> >> -int main(int argc, char ** argv)
> >> >> +static unsigned int read_setup(char *path)
> >> >>  {
> >> >> -       unsigned int i, sz, setup_sectors, init_sz;
> >> >> -       int c;
> >> >> -       u32 sys_size;
> >> >> -       struct stat sb;
> >> >> -       FILE *file, *dest;
> >> >> -       int fd;
> >> >> -       void *kernel;
> >> >> -       u32 crc = 0xffffffffUL;
> >> >> -
> >> >> -       efi_stub_defaults();
> >> >> -
> >> >> -       if (argc != 5)
> >> >> -               usage();
> >> >> -       parse_zoffset(argv[3]);
> >> >> -
> >> >> -       dest = fopen(argv[4], "w");
> >> >> -       if (!dest)
> >> >> -               die("Unable to write `%s': %m", argv[4]);
> >> >> +       FILE *file;
> >> >> +       unsigned int setup_size, file_size;
> >> >>
> >> >>         /* Copy the setup code */
> >> >> -       file = fopen(argv[1], "r");
> >> >> +       file = fopen(path, "r");
> >> >>         if (!file)
> >> >> -               die("Unable to open `%s': %m", argv[1]);
> >> >> -       c = fread(buf, 1, sizeof(buf), file);
> >> >> +               die("Unable to open `%s': %m", path);
> >> >> +
> >> >> +       file_size = fread(buf, 1, sizeof(buf), file);
> >> >>         if (ferror(file))
> >> >>                 die("read-error on `setup'");
> >> >> -       if (c < 1024)
> >> >> +
> >> >> +       if (file_size < 2 * SECTOR_SIZE)
> >> >>                 die("The setup must be at least 1024 bytes");
> >> >> -       if (get_unaligned_le16(&buf[510]) != 0xAA55)
> >> >> +
> >> >> +       if (get_unaligned_le16(&buf[SECTOR_SIZE - 2]) != 0xAA55)
> >> >>                 die("Boot block hasn't got boot flag (0xAA55)");
> >> >> +
> >> >>         fclose(file);
> >> >>
> >> >> -       c += reserve_pecoff_compat_section(c);
> >> >> -       c += reserve_pecoff_reloc_section(c);
> >> >> +       /* Reserve space for PE sections */
> >> >> +       file_size += reserve_pecoff_compat_section(file_size);
> >> >> +       file_size += reserve_pecoff_reloc_section(file_size);
> >> >>
> >> >>         /* Pad unused space with zeros */
> >> >> -       setup_sectors = (c + 511) / 512;
> >> >> -       if (setup_sectors < SETUP_SECT_MIN)
> >> >> -               setup_sectors = SETUP_SECT_MIN;
> >> >> -       i = setup_sectors*512;
> >> >> -       memset(buf+c, 0, i-c);
> >> >>
> >> >> -       update_pecoff_setup_and_reloc(i);
> >> >> +       setup_size = round_up(file_size, SECTOR_SIZE);
> >> >> +
> >> >> +       if (setup_size < SETUP_SECT_MIN * SECTOR_SIZE)
> >> >> +               setup_size = SETUP_SECT_MIN * SECTOR_SIZE;
> >> >> +
> >> >> +       /*
> >> >> +        * Global buffer is already initialised
> >> >> +        * to 0, but just in case, zero out padding.
> >> >> +        */
> >> >> +
> >> >> +       memset(buf + file_size, 0, setup_size - file_size);
> >> >> +
> >> >> +       return setup_size;
> >> >> +}
> >> >> +
> >> >> +int main(int argc, char **argv)
> >> >> +{
> >> >> +       size_t kern_file_size;
> >> >> +       unsigned int setup_size;
> >> >> +       unsigned int setup_sectors;
> >> >> +       unsigned int init_size;
> >> >> +       unsigned int total_size;
> >> >> +       unsigned int kern_size;
> >> >> +       void *kernel;
> >> >> +       uint32_t crc = 0xffffffffUL;
> >> >> +       uint8_t *output;
> >> >> +
> >> >> +       if (argc != 5)
> >> >> +               usage();
> >> >> +
> >> >> +       efi_stub_update_defaults();
> >> >> +       parse_zoffset(argv[3]);
> >> >> +
> >> >> +       setup_size = read_setup(argv[1]);
> >> >> +
> >> >> +       setup_sectors = setup_size/SECTOR_SIZE;
> >> >>
> >> >>         /* Set the default root device */
> >> >>         put_unaligned_le16(DEFAULT_ROOT_DEV, &buf[508]);
> >> >>
> >> >> -       /* Open and stat the kernel file */
> >> >> -       fd = open(argv[2], O_RDONLY);
> >> >> -       if (fd < 0)
> >> >> -               die("Unable to open `%s': %m", argv[2]);
> >> >> -       if (fstat(fd, &sb))
> >> >> -               die("Unable to stat `%s': %m", argv[2]);
> >> >> -       sz = sb.st_size;
> >> >> -       kernel = mmap(NULL, sz, PROT_READ, MAP_SHARED, fd, 0);
> >> >> -       if (kernel == MAP_FAILED)
> >> >> -               die("Unable to mmap '%s': %m", argv[2]);
> >> >> -       /* Number of 16-byte paragraphs, including space for a 4-byte
> >> >> CRC */
> >> >> -       sys_size = (sz + 15 + 4) / 16;
> >> >> +       /* Map kernel file to memory */
> >> >> +       kernel = map_file(argv[2], &kern_file_size);
> >> >> +
> >> >>  #ifdef CONFIG_EFI_STUB
> >> >> -       /*
> >> >> -        * COFF requires minimum 32-byte alignment of sections, and
> >> >> -        * adding a signature is problematic without that alignment.
> >> >> -        */
> >> >> -       sys_size = (sys_size + 1) & ~1;
> >> >> +       /* PE specification require 512-byte minimum section file
> >> >> alignment */
> >> >> +       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
> >> >> +       update_pecoff_setup_and_reloc(setup_size);
> >> >> +#else
> >> >> +       /* Number of 16-byte paragraphs, including space for a 4-byte
> >> >> CRC */
> >> >> +       kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
> >> >>  #endif
> >> >>
> >> >>         /* Patch the setup code with the appropriate size parameters
> >> >> */
> >> >> -       buf[0x1f1] = setup_sectors-1;
> >> >> -       put_unaligned_le32(sys_size, &buf[0x1f4]);
> >> >> +       buf[0x1f1] = setup_sectors - 1;
> >> >> +       put_unaligned_le32(kern_size/PARAGRAPH_SIZE, &buf[0x1f4]);
> >> >> +
> >> >> +       /* Update kernel_info offset. */
> >> >> +       put_unaligned_le32(kernel_info, &buf[0x268]);
> >> >> +
> >> >> +       init_size = get_unaligned_le32(&buf[0x260]);
> >> >>
> >> >> -       init_sz = get_unaligned_le32(&buf[0x260]);
> >> >>  #ifdef CONFIG_EFI_STUB
> >> >>         /*
> >> >>          * The decompression buffer will start at ImageBase. When
> >> >> relocating
> >> >> @@ -458,45 +571,35 @@ int main(int argc, char ** argv)
> >> >>          * For future-proofing, increase init_sz if necessary.
> >> >>          */
> >> >>
> >> >> -       if (init_sz - _end < i + _ehead) {
> >> >> -               init_sz = (i + _ehead + _end + 4095) & ~4095;
> >> >> -               put_unaligned_le32(init_sz, &buf[0x260]);
> >> >> +       if (init_size - _end < setup_size + _ehead) {
> >> >> +               init_size = round_up(setup_size + _ehead + _end,
> >> >> SECTION_ALIGNMENT);
> >> >> +               put_unaligned_le32(init_size, &buf[0x260]);
> >> >>         }
> >> >> -#endif
> >> >> -       update_pecoff_text(setup_sectors * 512, i + (sys_size * 16),
> >> >> init_sz);
> >> >>
> >> >> -       efi_stub_entry_update();
> >> >> -
> >> >> -       /* Update kernel_info offset. */
> >> >> -       put_unaligned_le32(kernel_info, &buf[0x268]);
> >> >> +       total_size = update_pecoff_sections(setup_size, kern_size,
> >> >> init_size);
> >> >>
> >> >> -       crc = partial_crc32(buf, i, crc);
> >> >> -       if (fwrite(buf, 1, i, dest) != i)
> >> >> -               die("Writing setup failed");
> >> >> +       efi_stub_entry_update();
> >> >> +#else
> >> >> +       (void)init_size;
> >> >> +       total_size = setup_size + kern_size;
> >> >> +#endif
> >> >>
> >> >> -       /* Copy the kernel code */
> >> >> -       crc = partial_crc32(kernel, sz, crc);
> >> >> -       if (fwrite(kernel, 1, sz, dest) != sz)
> >> >> -               die("Writing kernel failed");
> >> >> +       output = map_output_file(argv[4], total_size);
> >> >>
> >> >> -       /* Add padding leaving 4 bytes for the checksum */
> >> >> -       while (sz++ < (sys_size*16) - 4) {
> >> >> -               crc = partial_crc32_one('\0', crc);
> >> >> -               if (fwrite("\0", 1, 1, dest) != 1)
> >> >> -                       die("Writing padding failed");
> >> >> -       }
> >> >> +       memcpy(output, buf, setup_size);
> >> >> +       memcpy(output + setup_size, kernel, kern_file_size);
> >> >> +       memset(output + setup_size + kern_file_size, 0, kern_size -
> >> >> kern_file_size);
> >> >>
> >> >> -       /* Write the CRC */
> >> >> -       put_unaligned_le32(crc, buf);
> >> >> -       if (fwrite(buf, 1, 4, dest) != 4)
> >> >> -               die("Writing CRC failed");
> >> >> +       /* Calculate and write kernel checksum. */
> >> >> +       crc = partial_crc32(output, total_size - 4, crc);
> >> >> +       put_unaligned_le32(crc, &output[total_size - 4]);
> >> >>
> >> >> -       /* Catch any delayed write failures */
> >> >> -       if (fclose(dest))
> >> >> -               die("Writing image failed");
> >> >> +       /* Catch any delayed write failures. */
> >> >> +       if (munmap(output, total_size) < 0)
> >> >> +               die("Writing kernel failed");
> >> >>
> >> >> -       close(fd);
> >> >> +       unmap_file(kernel, kern_file_size);
> >> >>
> >> >> -       /* Everything is OK */
> >> >> +       /* Everything is OK. */
> >> >>         return 0;
> >> >>  }
> >> >> --
> >> >> 2.37.4
> >> >>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size
  2022-12-15 12:37 ` [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size Evgeniy Baskov
@ 2023-03-10 14:43   ` Ard Biesheuvel
  2023-03-11 14:30     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:43 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> To protect sections on page table level each section
> needs to be aligned on page size (4KB).
>
> Set sections alignment in linker script.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
> index 112b2375d021..6be90f1a1198 100644
> --- a/arch/x86/boot/compressed/vmlinux.lds.S
> +++ b/arch/x86/boot/compressed/vmlinux.lds.S
> @@ -27,21 +27,27 @@ SECTIONS
>                 HEAD_TEXT
>                 _ehead = . ;
>         }
> +       . = ALIGN(PAGE_SIZE);
>         .rodata..compressed : {
> +               _compressed = .;
>                 *(.rodata..compressed)

Can you just move this bit into the rodata section below?

> +               _ecompressed = .;
>         }
> +       . = ALIGN(PAGE_SIZE);
>         .text : {

Please use

.text : ALIGN(PAGE_SIZE) {

which marks the section as being page aligned, rather than just being
placed on a 4k boundary.

>                 _text = .;      /* Text */
>                 *(.text)
>                 *(.text.*)
>                 _etext = . ;
>         }
> +       . = ALIGN(PAGE_SIZE);
>         .rodata : {
>                 _rodata = . ;
>                 *(.rodata)       /* read-only data */
>                 *(.rodata.*)
>                 _erodata = . ;
>         }
> +       . = ALIGN(PAGE_SIZE);
>         .data : {
>                 _data = . ;
>                 *(.data)
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB
  2022-12-15 12:37 ` [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB Evgeniy Baskov
@ 2023-03-10 14:45   ` Ard Biesheuvel
  2023-03-11 14:31     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:45 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Avoid creating sections simultaneously writable and readable
> to prepare for W^X implementation. Align sections on page size (4KB) to
> allow protecting them in the page tables.
>
> Split init code form ".init" segment into separate R_X ".inittext"
> segment and make ".init" segment non-executable.
>
> Also add these segments to x86_32 architecture for consistency.
> Currently paging is disabled in x86_32 in compressed kernel, so
> protection is not applied anyways, but .init code was incorrectly
> placed in non-executable ".data" segment. This should not change
> anything meaningful in memory layout now, but might be required in case
> memory protection will also be implemented in compressed kernel for
> x86_32.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>

One nit: the series modifies both the decompressor linker script and
the core kernel one, so please make it very explicit in the commit log
which one is being modified, and why it matters for this particular
context.


> ---
>  arch/x86/kernel/vmlinux.lds.S | 15 ++++++++-------
>  1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
> index 2e0ee14229bf..2e56d694c491 100644
> --- a/arch/x86/kernel/vmlinux.lds.S
> +++ b/arch/x86/kernel/vmlinux.lds.S
> @@ -102,12 +102,11 @@ jiffies = jiffies_64;
>  PHDRS {
>         text PT_LOAD FLAGS(5);          /* R_E */
>         data PT_LOAD FLAGS(6);          /* RW_ */
> -#ifdef CONFIG_X86_64
> -#ifdef CONFIG_SMP
> +#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
>         percpu PT_LOAD FLAGS(6);        /* RW_ */
>  #endif
> -       init PT_LOAD FLAGS(7);          /* RWE */
> -#endif
> +       inittext PT_LOAD FLAGS(5);      /* R_E */
> +       init PT_LOAD FLAGS(6);          /* RW_ */
>         note PT_NOTE FLAGS(0);          /* ___ */
>  }
>
> @@ -227,9 +226,10 @@ SECTIONS
>  #endif
>
>         INIT_TEXT_SECTION(PAGE_SIZE)
> -#ifdef CONFIG_X86_64
> -       :init
> -#endif
> +       :inittext
> +
> +       . = ALIGN(PAGE_SIZE);
> +
>
>         /*
>          * Section for code used exclusively before alternatives are run. All
> @@ -241,6 +241,7 @@ SECTIONS
>         .altinstr_aux : AT(ADDR(.altinstr_aux) - LOAD_OFFSET) {
>                 *(.altinstr_aux)
>         }
> +       :init
>
>         INIT_DATA_SECTION(16)
>
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 03/26] x86/boot: Set cr0 to known state in trampoline
  2022-12-15 12:37 ` [PATCH v4 03/26] x86/boot: Set cr0 to known state in trampoline Evgeniy Baskov
@ 2023-03-10 14:48   ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:48 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Ensure WP bit to be set to prevent boot code from writing to
> non-writable memory pages.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

Acked-by: Ard Biesheuvel <ardb@kernel.org>

> ---
>  arch/x86/boot/compressed/head_64.S | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index a75712991df3..9f2e8f50fc71 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -660,9 +660,8 @@ SYM_CODE_START(trampoline_32bit_src)
>         pushl   $__KERNEL_CS
>         pushl   %eax
>
> -       /* Enable paging again. */
> -       movl    %cr0, %eax
> -       btsl    $X86_CR0_PG_BIT, %eax
> +       /* Enable paging and set CR0 to known state (this also sets WP flag) */
> +       movl    $CR0_STATE, %eax
>         movl    %eax, %cr0
>
>         lret
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 09/26] x86/boot: Remove mapping from page fault handler
  2022-12-15 12:38 ` [PATCH v4 09/26] x86/boot: Remove mapping from page fault handler Evgeniy Baskov
@ 2023-03-10 14:49   ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:49 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> After every implicit mapping is removed, this code is no longer needed.
>
> Remove memory mapping from page fault handler to ensure that there are
> no hidden invalid memory accesses.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>

> ---
>  arch/x86/boot/compressed/ident_map_64.c | 26 ++++++++++---------------
>  1 file changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
> index fec795a4ce23..ba5108c58a4e 100644
> --- a/arch/x86/boot/compressed/ident_map_64.c
> +++ b/arch/x86/boot/compressed/ident_map_64.c
> @@ -386,27 +386,21 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
>  {
>         unsigned long address = native_read_cr2();
>         unsigned long end;
> -       bool ghcb_fault;
> +       char *msg;
>
> -       ghcb_fault = sev_es_check_ghcb_fault(address);
> +       if (sev_es_check_ghcb_fault(address))
> +               msg = "Page-fault on GHCB page:";
> +       else
> +               msg = "Unexpected page-fault:";
>
>         address   &= PMD_MASK;
>         end        = address + PMD_SIZE;
>
>         /*
> -        * Check for unexpected error codes. Unexpected are:
> -        *      - Faults on present pages
> -        *      - User faults
> -        *      - Reserved bits set
> -        */
> -       if (error_code & (X86_PF_PROT | X86_PF_USER | X86_PF_RSVD))
> -               do_pf_error("Unexpected page-fault:", error_code, address, regs->ip);
> -       else if (ghcb_fault)
> -               do_pf_error("Page-fault on GHCB page:", error_code, address, regs->ip);
> -
> -       /*
> -        * Error code is sane - now identity map the 2M region around
> -        * the faulting address.
> +        * Since all memory allocations are made explicit
> +        * now, every page fault at this stage is an
> +        * error and the error handler is there only
> +        * for debug purposes.
>          */
> -       kernel_add_identity_map(address, end, MAP_WRITE);
> +       do_pf_error(msg, error_code, address, regs->ip);
>  }
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer
  2022-12-15 12:38 ` [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer Evgeniy Baskov
@ 2023-03-10 14:52   ` Ard Biesheuvel
  2023-03-11 14:34     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:52 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Convert kernel_add_identity_map() into a function pointer to be able
> to provide alternative implementations of this function. Required
> to enable calling the code using this function from EFI environment.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/compressed/ident_map_64.c |  7 ++++---
>  arch/x86/boot/compressed/misc.c         | 24 ++++++++++++++++++++++++
>  arch/x86/boot/compressed/misc.h         | 15 +++------------
>  3 files changed, 31 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
> index ba5108c58a4e..1aee524d3c2b 100644
> --- a/arch/x86/boot/compressed/ident_map_64.c
> +++ b/arch/x86/boot/compressed/ident_map_64.c
> @@ -92,9 +92,9 @@ bool has_nx; /* set in head_64.S */
>  /*
>   * Adds the specified range to the identity mappings.
>   */
> -unsigned long kernel_add_identity_map(unsigned long start,
> -                                     unsigned long end,
> -                                     unsigned int flags)
> +unsigned long kernel_add_identity_map_(unsigned long start,

Please use a more discriminating name here - the trailing _ is rather
hard to spot.

> +                                      unsigned long end,
> +                                      unsigned int flags)
>  {
>         int ret;
>
> @@ -142,6 +142,7 @@ void initialize_identity_maps(void *rmode)
>         struct setup_data *sd;
>
>         boot_params = rmode;
> +       kernel_add_identity_map = kernel_add_identity_map_;
>
>         /* Exclude the encryption mask from __PHYSICAL_MASK */
>         physical_mask &= ~sme_me_mask;
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index aa4a22bc9cf9..c9c235d65d16 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -275,6 +275,22 @@ static void parse_elf(void *output, unsigned long output_len,
>         free(phdrs);
>  }
>
> +/*
> + * This points to actual implementation of mapping function
> + * for current environment: either EFI API wrapper,
> + * own implementation or dummy implementation below.
> + */
> +unsigned long (*kernel_add_identity_map)(unsigned long start,
> +                                        unsigned long end,
> +                                        unsigned int flags);
> +
> +static inline unsigned long kernel_add_identity_map_dummy(unsigned long start,

This function is never called, it only has its address taken, so the
'inline' makes no sense here.

> +                                                         unsigned long end,
> +                                                         unsigned int flags)
> +{
> +       return start;
> +}
> +
>  /*
>   * The compressed kernel image (ZO), has been moved so that its position
>   * is against the end of the buffer used to hold the uncompressed kernel
> @@ -312,6 +328,14 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
>
>         init_default_io_ops();
>
> +       /*
> +        * On 64-bit this pointer is set during page table uninitialization,

initialization

> +        * but on 32-bit it remains uninitialized, since paging is disabled.
> +        */
> +       if (IS_ENABLED(CONFIG_X86_32))
> +               kernel_add_identity_map = kernel_add_identity_map_dummy;
> +
> +
>         /*
>          * Detect TDX guest environment.
>          *
> diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
> index 38d31bec062d..0076b2845b4b 100644
> --- a/arch/x86/boot/compressed/misc.h
> +++ b/arch/x86/boot/compressed/misc.h
> @@ -180,18 +180,9 @@ static inline int count_immovable_mem_regions(void) { return 0; }
>  #ifdef CONFIG_X86_5LEVEL
>  extern unsigned int __pgtable_l5_enabled, pgdir_shift, ptrs_per_p4d;
>  #endif
> -#ifdef CONFIG_X86_64
> -extern unsigned long kernel_add_identity_map(unsigned long start,
> -                                            unsigned long end,
> -                                            unsigned int flags);
> -#else
> -static inline unsigned long kernel_add_identity_map(unsigned long start,
> -                                                   unsigned long end,
> -                                                   unsigned int flags)
> -{
> -       return start;
> -}
> -#endif
> +extern unsigned long (*kernel_add_identity_map)(unsigned long start,
> +                                               unsigned long end,
> +                                               unsigned int flags);
>  /* Used by PAGE_KERN* macros: */
>  extern pteval_t __default_kernel_pte_mask;
>
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 13/26] x86/boot: Split trampoline and pt init code
  2022-12-15 12:38 ` [PATCH v4 13/26] x86/boot: Split trampoline and pt init code Evgeniy Baskov
@ 2023-03-10 14:56   ` Ard Biesheuvel
  2023-03-11 14:37     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:56 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> When allocating trampoline from libstub trampoline allocation is
> performed separately, so it needs to be skipped.
>
> Split trampoline initialization and allocation code into two
> functions to make them invokable separately.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> ---
>  arch/x86/boot/compressed/pgtable_64.c | 73 +++++++++++++++++----------
>  1 file changed, 46 insertions(+), 27 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
> index c7cf5a1059a8..1f7169248612 100644
> --- a/arch/x86/boot/compressed/pgtable_64.c
> +++ b/arch/x86/boot/compressed/pgtable_64.c
> @@ -106,12 +106,8 @@ static unsigned long find_trampoline_placement(void)
>         return bios_start - TRAMPOLINE_32BIT_SIZE;
>  }
>
> -struct paging_config paging_prepare(void *rmode)
> +bool trampoline_pgtable_init(struct boot_params *boot_params)
>  {
> -       struct paging_config paging_config = {};
> -
> -       /* Initialize boot_params. Required for cmdline_find_option_bool(). */
> -       boot_params = rmode;
>
>         /*
>          * Check if LA57 is desired and supported.
> @@ -125,26 +121,10 @@ struct paging_config paging_prepare(void *rmode)
>          *
>          * That's substitute for boot_cpu_has() in early boot code.
>          */
> -       if (IS_ENABLED(CONFIG_X86_5LEVEL) &&
> -                       !cmdline_find_option_bool("no5lvl") &&
> -                       native_cpuid_eax(0) >= 7 &&
> -                       (native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31)))) {
> -               paging_config.l5_required = 1;
> -       }
> -
> -       paging_config.trampoline_start = find_trampoline_placement();
> -
> -       trampoline_32bit = (unsigned long *)paging_config.trampoline_start;
> -
> -       /* Preserve trampoline memory */
> -       memcpy(trampoline_save, trampoline_32bit, TRAMPOLINE_32BIT_SIZE);
> -
> -       /* Clear trampoline memory first */
> -       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
> -
> -       /* Copy trampoline code in place */
> -       memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / sizeof(unsigned long),
> -                       &trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
> +       bool l5_required = IS_ENABLED(CONFIG_X86_5LEVEL) &&
> +                          !cmdline_find_option_bool("no5lvl") &&
> +                          native_cpuid_eax(0) >= 7 &&
> +                          (native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31)));
>
>         /*
>          * The code below prepares page table in trampoline memory.
> @@ -160,10 +140,10 @@ struct paging_config paging_prepare(void *rmode)
>          * We are not going to use the page table in trampoline memory if we
>          * are already in the desired paging mode.
>          */
> -       if (paging_config.l5_required == !!(native_read_cr4() & X86_CR4_LA57))
> +       if (l5_required == !!(native_read_cr4() & X86_CR4_LA57))
>                 goto out;
>
> -       if (paging_config.l5_required) {
> +       if (l5_required) {
>                 /*
>                  * For 4- to 5-level paging transition, set up current CR3 as
>                  * the first and the only entry in a new top-level page table.
> @@ -185,6 +165,45 @@ struct paging_config paging_prepare(void *rmode)
>                        (void *)src, PAGE_SIZE);
>         }
>
> +out:
> +       return l5_required;
> +}
> +
> +struct paging_config paging_prepare(void *rmode)
> +{
> +       struct paging_config paging_config = {};
> +       bool early_trampoline_alloc = 0;

false

> +
> +       /* Initialize boot_params. Required for cmdline_find_option_bool(). */
> +       boot_params = rmode;
> +
> +       /*
> +        * We only need to find trampoline placement, if we have
> +        * not already done it from libstub.
> +        */
> +
> +       paging_config.trampoline_start = find_trampoline_placement();
> +       trampoline_32bit = (unsigned long *)paging_config.trampoline_start;
> +       early_trampoline_alloc = 0;
> +

false again

And it never becomes true, nor is it used anywhere else. Can we get rid of it?

> +       /*
> +        * Preserve trampoline memory.
> +        * When trampoline is located in memory
> +        * owned by us, i.e. allocated in EFISTUB,
> +        * we don't care about previous contents
> +        * of this memory so copying can also be skipped.

Can you please reflow comments so they takes up fewer lines?

> +        */
> +       memcpy(trampoline_save, trampoline_32bit, TRAMPOLINE_32BIT_SIZE);
> +
> +       /* Clear trampoline memory first */
> +       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
> +
> +       /* Copy trampoline code in place */
> +       memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / sizeof(unsigned long),
> +                       &trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
> +
> +       paging_config.l5_required = trampoline_pgtable_init(boot_params);
> +
>  out:
>         return paging_config;
>  }
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub
  2022-12-15 12:38 ` [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub Evgeniy Baskov
@ 2023-03-10 14:59   ` Ard Biesheuvel
  2023-03-11 14:49     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 14:59 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> This is required to fit more sections in PE section tables,
> since its size is restricted by zero page located at specific offset
> after the PE header.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

I'd prefer to rip this out altogether.

https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?id=9510f6f04f579b9a3f54ad762c75ab2d905e37d8

(and refer to the other thread in linux-efi@)

> ---
>  arch/x86/boot/header.S | 14 ++++++--------
>  1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
> index 9338c68e7413..9fec80bc504b 100644
> --- a/arch/x86/boot/header.S
> +++ b/arch/x86/boot/header.S
> @@ -59,17 +59,16 @@ start2:
>         cld
>
>         movw    $bugger_off_msg, %si
> +       movw    $bugger_off_msg_size, %cx
>
>  msg_loop:
>         lodsb
> -       andb    %al, %al
> -       jz      bs_die
>         movb    $0xe, %ah
>         movw    $7, %bx
>         int     $0x10
> -       jmp     msg_loop
> +       decw    %cx
> +       jnz     msg_loop
>
> -bs_die:
>         # Allow the user to press a key, then reboot
>         xorw    %ax, %ax
>         int     $0x16
> @@ -90,10 +89,9 @@ bs_die:
>
>         .section ".bsdata", "a"
>  bugger_off_msg:
> -       .ascii  "Use a boot loader.\r\n"
> -       .ascii  "\n"
> -       .ascii  "Remove disk and press any key to reboot...\r\n"
> -       .byte   0
> +       .ascii  "Use a boot loader. "
> +       .ascii  "Press a key to reboot"
> +       .set    bugger_off_msg_size, . - bugger_off_msg
>
>  #ifdef CONFIG_EFI_STUB
>  pe_header:
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 18/26] tools/include: Add simplified version of pe.h
  2022-12-15 12:38 ` [PATCH v4 18/26] tools/include: Add simplified version of pe.h Evgeniy Baskov
@ 2023-03-10 15:01   ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 15:01 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> This is needed to remove magic numbers from x86 bzImage building tool
> (arch/x86/boot/tools/build.c).
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

Acked-by: Ard Biesheuvel <ardb@kernel.org>

> ---
>  tools/include/linux/pe.h | 150 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 150 insertions(+)
>  create mode 100644 tools/include/linux/pe.h
>
> diff --git a/tools/include/linux/pe.h b/tools/include/linux/pe.h
> new file mode 100644
> index 000000000000..41c09ec371d8
> --- /dev/null
> +++ b/tools/include/linux/pe.h
> @@ -0,0 +1,150 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Simplified version of include/linux/pe.h:
> + *  Copyright 2011 Red Hat, Inc. All rights reserved.
> + *  Author(s): Peter Jones <pjones@redhat.com>
> + */
> +#ifndef __LINUX_PE_H
> +#define __LINUX_PE_H
> +
> +#include <linux/types.h>
> +
> +#define        IMAGE_FILE_MACHINE_I386         0x014c
> +
> +#define IMAGE_SCN_CNT_CODE     0x00000020 /* .text */
> +#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040 /* .data */
> +#define IMAGE_SCN_ALIGN_4096BYTES 0x00d00000
> +#define IMAGE_SCN_MEM_DISCARDABLE 0x02000000 /* scn can be discarded */
> +#define IMAGE_SCN_MEM_EXECUTE  0x20000000 /* can be executed as code */
> +#define IMAGE_SCN_MEM_READ     0x40000000 /* readable */
> +#define IMAGE_SCN_MEM_WRITE    0x80000000 /* writeable */
> +
> +#define MZ_HEADER_PEADDR_OFFSET 0x3c
> +
> +struct pe_hdr {
> +       uint32_t magic;         /* PE magic */
> +       uint16_t machine;       /* machine type */
> +       uint16_t sections;      /* number of sections */
> +       uint32_t timestamp;     /* time_t */
> +       uint32_t symbol_table;  /* symbol table offset */
> +       uint32_t symbols;       /* number of symbols */
> +       uint16_t opt_hdr_size;  /* size of optional header */
> +       uint16_t flags;         /* flags */
> +};
> +
> +/* the fact that pe32 isn't padded where pe32+ is 64-bit means union won't
> + * work right.  vomit. */
> +struct pe32_opt_hdr {
> +       /* "standard" header */
> +       uint16_t magic;         /* file type */
> +       uint8_t  ld_major;      /* linker major version */
> +       uint8_t  ld_minor;      /* linker minor version */
> +       uint32_t text_size;     /* size of text section(s) */
> +       uint32_t data_size;     /* size of data section(s) */
> +       uint32_t bss_size;      /* size of bss section(s) */
> +       uint32_t entry_point;   /* file offset of entry point */
> +       uint32_t code_base;     /* relative code addr in ram */
> +       uint32_t data_base;     /* relative data addr in ram */
> +       /* "windows" header */
> +       uint32_t image_base;    /* preferred load address */
> +       uint32_t section_align; /* alignment in bytes */
> +       uint32_t file_align;    /* file alignment in bytes */
> +       uint16_t os_major;      /* major OS version */
> +       uint16_t os_minor;      /* minor OS version */
> +       uint16_t image_major;   /* major image version */
> +       uint16_t image_minor;   /* minor image version */
> +       uint16_t subsys_major;  /* major subsystem version */
> +       uint16_t subsys_minor;  /* minor subsystem version */
> +       uint32_t win32_version; /* reserved, must be 0 */
> +       uint32_t image_size;    /* image size */
> +       uint32_t header_size;   /* header size rounded up to
> +                                  file_align */
> +       uint32_t csum;          /* checksum */
> +       uint16_t subsys;        /* subsystem */
> +       uint16_t dll_flags;     /* more flags! */
> +       uint32_t stack_size_req;/* amt of stack requested */
> +       uint32_t stack_size;    /* amt of stack required */
> +       uint32_t heap_size_req; /* amt of heap requested */
> +       uint32_t heap_size;     /* amt of heap required */
> +       uint32_t loader_flags;  /* reserved, must be 0 */
> +       uint32_t data_dirs;     /* number of data dir entries */
> +};
> +
> +struct pe32plus_opt_hdr {
> +       uint16_t magic;         /* file type */
> +       uint8_t  ld_major;      /* linker major version */
> +       uint8_t  ld_minor;      /* linker minor version */
> +       uint32_t text_size;     /* size of text section(s) */
> +       uint32_t data_size;     /* size of data section(s) */
> +       uint32_t bss_size;      /* size of bss section(s) */
> +       uint32_t entry_point;   /* file offset of entry point */
> +       uint32_t code_base;     /* relative code addr in ram */
> +       /* "windows" header */
> +       uint64_t image_base;    /* preferred load address */
> +       uint32_t section_align; /* alignment in bytes */
> +       uint32_t file_align;    /* file alignment in bytes */
> +       uint16_t os_major;      /* major OS version */
> +       uint16_t os_minor;      /* minor OS version */
> +       uint16_t image_major;   /* major image version */
> +       uint16_t image_minor;   /* minor image version */
> +       uint16_t subsys_major;  /* major subsystem version */
> +       uint16_t subsys_minor;  /* minor subsystem version */
> +       uint32_t win32_version; /* reserved, must be 0 */
> +       uint32_t image_size;    /* image size */
> +       uint32_t header_size;   /* header size rounded up to
> +                                  file_align */
> +       uint32_t csum;          /* checksum */
> +       uint16_t subsys;        /* subsystem */
> +       uint16_t dll_flags;     /* more flags! */
> +       uint64_t stack_size_req;/* amt of stack requested */
> +       uint64_t stack_size;    /* amt of stack required */
> +       uint64_t heap_size_req; /* amt of heap requested */
> +       uint64_t heap_size;     /* amt of heap required */
> +       uint32_t loader_flags;  /* reserved, must be 0 */
> +       uint32_t data_dirs;     /* number of data dir entries */
> +};
> +
> +struct data_dirent {
> +       uint32_t virtual_address;       /* relative to load address */
> +       uint32_t size;
> +};
> +
> +struct data_directory {
> +       struct data_dirent exports;             /* .edata */
> +       struct data_dirent imports;             /* .idata */
> +       struct data_dirent resources;           /* .rsrc */
> +       struct data_dirent exceptions;          /* .pdata */
> +       struct data_dirent certs;               /* certs */
> +       struct data_dirent base_relocations;    /* .reloc */
> +       struct data_dirent debug;               /* .debug */
> +       struct data_dirent arch;                /* reservered */
> +       struct data_dirent global_ptr;          /* global pointer reg. Size=0 */
> +       struct data_dirent tls;                 /* .tls */
> +       struct data_dirent load_config;         /* load configuration structure */
> +       struct data_dirent bound_imports;       /* no idea */
> +       struct data_dirent import_addrs;        /* import address table */
> +       struct data_dirent delay_imports;       /* delay-load import table */
> +       struct data_dirent clr_runtime_hdr;     /* .cor (object only) */
> +       struct data_dirent reserved;
> +};
> +
> +struct section_header {
> +       char name[8];                   /* name or "/12\0" string tbl offset */
> +       uint32_t virtual_size;          /* size of loaded section in ram */
> +       uint32_t virtual_address;       /* relative virtual address */
> +       uint32_t raw_data_size;         /* size of the section */
> +       uint32_t data_addr;             /* file pointer to first page of sec */
> +       uint32_t relocs;                /* file pointer to relocation entries */
> +       uint32_t line_numbers;          /* line numbers! */
> +       uint16_t num_relocs;            /* number of relocations */
> +       uint16_t num_lin_numbers;       /* srsly. */
> +       uint32_t flags;
> +};
> +
> +struct coff_reloc {
> +       uint32_t virtual_address;
> +       uint32_t symbol_table_index;
> +       uint16_t data;
> +};
> +
> +#endif /* __LINUX_PE_H */
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub
  2022-12-15 12:38 ` [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub Evgeniy Baskov
  2023-03-09 16:00   ` Ard Biesheuvel
  2023-03-09 16:49   ` Ard Biesheuvel
@ 2023-03-10 15:08   ` Ard Biesheuvel
  2 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 15:08 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Doing it that way allows setting up stricter memory attributes,
> simplifies boot code path and removes potential relocation
> of kernel image.
>
> Wire up required interfaces and minimally initialize zero page
> fields needed for it to function correctly.
>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

Some more comments - apologies for the multi stage approach ...

> ---
>  arch/x86/boot/compressed/head_32.S            |  50 ++++-
>  arch/x86/boot/compressed/head_64.S            |  58 ++++-
>  drivers/firmware/efi/Kconfig                  |   2 +
>  drivers/firmware/efi/libstub/Makefile         |   2 +-
>  .../firmware/efi/libstub/x86-extract-direct.c | 208 ++++++++++++++++++
>  drivers/firmware/efi/libstub/x86-stub.c       | 119 +---------
>  drivers/firmware/efi/libstub/x86-stub.h       |  14 ++
>  7 files changed, 338 insertions(+), 115 deletions(-)
>  create mode 100644 drivers/firmware/efi/libstub/x86-extract-direct.c
>  create mode 100644 drivers/firmware/efi/libstub/x86-stub.h
>
...
> diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
> index 043ca31c114e..f50c2a84a754 100644
> --- a/drivers/firmware/efi/Kconfig
> +++ b/drivers/firmware/efi/Kconfig
> @@ -58,6 +58,8 @@ config EFI_DXE_MEM_ATTRIBUTES
>           Use DXE services to check and alter memory protection
>           attributes during boot via EFISTUB to ensure that memory
>           ranges used by the kernel are writable and executable.
> +         This option also enables stricter memory attributes
> +         on compressed kernel PE image.

images

>
>  config EFI_PARAMS_FROM_FDT
>         bool
> diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
> index be8b8c6e8b40..99b81c95344c 100644
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -88,7 +88,7 @@ lib-$(CONFIG_EFI_GENERIC_STUB)        += efi-stub.o string.o intrinsics.o systable.o \
>
>  lib-$(CONFIG_ARM)              += arm32-stub.o
>  lib-$(CONFIG_ARM64)            += arm64.o arm64-stub.o arm64-entry.o smbios.o
> -lib-$(CONFIG_X86)              += x86-stub.o
> +lib-$(CONFIG_X86)              += x86-stub.o x86-extract-direct.o
>  lib-$(CONFIG_RISCV)            += riscv.o riscv-stub.o
>  lib-$(CONFIG_LOONGARCH)                += loongarch.o loongarch-stub.o
>
> diff --git a/drivers/firmware/efi/libstub/x86-extract-direct.c b/drivers/firmware/efi/libstub/x86-extract-direct.c
> new file mode 100644
> index 000000000000..4ecbc4a9b3ed
> --- /dev/null
> +++ b/drivers/firmware/efi/libstub/x86-extract-direct.c
> @@ -0,0 +1,208 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <linux/acpi.h>
> +#include <linux/efi.h>
> +#include <linux/elf.h>
> +#include <linux/stddef.h>
> +
> +#include <asm/efi.h>
> +#include <asm/e820/types.h>
> +#include <asm/desc.h>
> +#include <asm/boot.h>
> +#include <asm/bootparam_utils.h>
> +#include <asm/shared/extract.h>
> +#include <asm/shared/pgtable.h>
> +
> +#include "efistub.h"
> +#include "x86-stub.h"
> +
> +static efi_handle_t image_handle;
> +
> +static void do_puthex(unsigned long value)
> +{
> +       efi_printk("%08lx", value);
> +}
> +
> +static void do_putstr(const char *msg)
> +{
> +       efi_printk("%s", msg);
> +}
> +
> +static unsigned long do_map_range(unsigned long start,
> +                                 unsigned long end,
> +                                 unsigned int flags)
> +{
> +       efi_status_t status;
> +

Please drop this newline.

> +       unsigned long size = end - start;
> +
> +       if (flags & MAP_ALLOC) {
> +               unsigned long addr;
> +
> +               status = efi_low_alloc_above(size, CONFIG_PHYSICAL_ALIGN,
> +                                            &addr, start);
> +               if (status != EFI_SUCCESS) {
> +                       efi_err("Unable to allocate memory for uncompressed kernel");
> +                       efi_exit(image_handle, EFI_OUT_OF_RESOURCES);
> +               }
> +

OK, so this is the place where the chosen address for deompressing the
kernel is actually allocated and carved out in the EFI memory map.
Could you add a comment here so other folks won't be confused like I
was how this is put together?

> +               if (start != addr) {
> +                       efi_debug("Unable to allocate at given address"
> +                                 " (desired=0x%lx, actual=0x%lx)",
> +                                 (unsigned long)start, addr);
> +                       start = addr;
> +               }
> +       }
> +
> +       if ((flags & (MAP_PROTECT | MAP_ALLOC)) &&
> +           IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> +               unsigned long attr = 0;
> +
> +               if (!(flags & MAP_EXEC))
> +                       attr |= EFI_MEMORY_XP;
> +
> +               if (!(flags & MAP_WRITE))
> +                       attr |= EFI_MEMORY_RO;
> +
> +               status = efi_adjust_memory_range_protection(start, size, attr);
> +               if (status != EFI_SUCCESS)
> +                       efi_err("Unable to protect memory range");
> +       }
> +
> +       return start;
> +}
> +
> +/*
> + * Trampoline takes 3 pages and can be loaded in first megabyte of memory
> + * with its end placed between 0 and 640k where BIOS might start.
> + * (see arch/x86/boot/compressed/pgtable_64.c)
> + */
> +
> +#ifdef CONFIG_64BIT
> +static efi_status_t prepare_trampoline(void)
> +{
> +       efi_status_t status;
> +
> +       status = efi_allocate_pages(TRAMPOLINE_32BIT_SIZE,
> +                                   (unsigned long *)&trampoline_32bit,
> +                                   TRAMPOLINE_32BIT_PLACEMENT_MAX);
> +
> +       if (status != EFI_SUCCESS)
> +               return status;
> +
> +       unsigned long trampoline_start = (unsigned long)trampoline_32bit;
> +

Please put all variable declarations at the start of the block

> +       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
> +
> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> +               /* First page of trampoline is a top level page table */
> +               efi_adjust_memory_range_protection(trampoline_start,
> +                                                  PAGE_SIZE,
> +                                                  EFI_MEMORY_XP);
> +       }
> +
> +       /* Second page of trampoline is the code (with a padding) */
> +
> +       void *caddr = (void *)trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET;
> +

same here

> +       memcpy(caddr, trampoline_32bit_src, TRAMPOLINE_32BIT_CODE_SIZE);
> +
> +       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES)) {
> +               efi_adjust_memory_range_protection((unsigned long)caddr,
> +                                                  PAGE_SIZE,
> +                                                  EFI_MEMORY_RO);
> +
> +               /* And the last page of trampoline is the stack */
> +
> +               efi_adjust_memory_range_protection(trampoline_start + 2 * PAGE_SIZE,
> +                                                  PAGE_SIZE,
> +                                                  EFI_MEMORY_XP);
> +       }
> +
> +       return EFI_SUCCESS;
> +}
> +#else
> +static inline efi_status_t prepare_trampoline(void)
> +{
> +       return EFI_SUCCESS;
> +}
> +#endif
> +
> +static efi_status_t init_loader_data(efi_handle_t handle,
> +                                    struct boot_params *params,
> +                                    struct efi_boot_memmap **map)
> +{
> +       struct efi_info *efi = (void *)&params->efi_info;
> +       efi_status_t status;
> +
> +       status = efi_get_memory_map(map, false);
> +
> +       if (status != EFI_SUCCESS) {
> +               efi_err("Unable to get EFI memory map...\n");
> +               return status;
> +       }
> +
> +       const char *signature = efi_is_64bit() ? EFI64_LOADER_SIGNATURE
> +                                              : EFI32_LOADER_SIGNATURE;
> +

Move this to the start

> +       memcpy(&efi->efi_loader_signature, signature, sizeof(__u32));
> +
> +       efi->efi_memdesc_size = (*map)->desc_size;
> +       efi->efi_memdesc_version = (*map)->desc_ver;
> +       efi->efi_memmap_size = (*map)->map_size;
> +
> +       efi_set_u64_split((unsigned long)(*map)->map,
> +                         &efi->efi_memmap, &efi->efi_memmap_hi);
> +
> +       efi_set_u64_split((unsigned long)efi_system_table,
> +                         &efi->efi_systab, &efi->efi_systab_hi);
> +
> +       image_handle = handle;
> +
> +       return EFI_SUCCESS;
> +}
> +
> +static void free_loader_data(struct boot_params *params, struct efi_boot_memmap *map)
> +{
> +       struct efi_info *efi = (void *)&params->efi_info;
> +
> +       efi_bs_call(free_pool, map);
> +
> +       efi->efi_memdesc_size = 0;
> +       efi->efi_memdesc_version = 0;
> +       efi->efi_memmap_size = 0;
> +       efi_set_u64_split(0, &efi->efi_memmap, &efi->efi_memmap_hi);
> +}
> +
> +extern unsigned char input_data[];
> +extern unsigned int input_len, output_len;
> +
> +unsigned long extract_kernel_direct(efi_handle_t handle, struct boot_params *params)
> +{
> +
> +       void *res;
> +       efi_status_t status;
> +       struct efi_extract_callbacks cb = { 0 };
> +
> +       status = prepare_trampoline();
> +
> +       if (status != EFI_SUCCESS)
> +               return 0;
> +
> +       /* Prepare environment for do_extract_kernel() call */
> +       struct efi_boot_memmap *map = NULL;

Move this to the start.

> +       status = init_loader_data(handle, params, &map);
> +
> +       if (status != EFI_SUCCESS)
> +               return 0;
> +
> +       cb.puthex = do_puthex;
> +       cb.putstr = do_putstr;
> +       cb.map_range = do_map_range;
> +
> +       res = efi_extract_kernel(params, &cb, input_data, input_len, output_len);
> +
> +       free_loader_data(params, map);
> +
> +       return (unsigned long)res;
> +}
> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
> index 7fb1eff88a18..1d1ab1911fd3 100644
> --- a/drivers/firmware/efi/libstub/x86-stub.c
> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> @@ -17,6 +17,7 @@
>  #include <asm/boot.h>
>
>  #include "efistub.h"
> +#include "x86-stub.h"
>
>  /* Maximum physical address for 64-bit kernel with 4-level paging */
>  #define MAXMEM_X86_64_4LEVEL (1ull << 46)
> @@ -24,7 +25,7 @@
>  const efi_system_table_t *efi_system_table;
>  const efi_dxe_services_table_t *efi_dxe_table;
>  u32 image_offset __section(".data");
> -static efi_loaded_image_t *image = NULL;
> +static efi_loaded_image_t *image __section(".data");
>
>  static efi_status_t
>  preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
> @@ -212,55 +213,9 @@ static void retrieve_apple_device_properties(struct boot_params *boot_params)
>         }
>  }
>
> -/*
> - * Trampoline takes 2 pages and can be loaded in first megabyte of memory
> - * with its end placed between 128k and 640k where BIOS might start.
> - * (see arch/x86/boot/compressed/pgtable_64.c)
> - *
> - * We cannot find exact trampoline placement since memory map
> - * can be modified by UEFI, and it can alter the computed address.
> - */
> -
> -#define TRAMPOLINE_PLACEMENT_BASE ((128 - 8)*1024)
> -#define TRAMPOLINE_PLACEMENT_SIZE (640*1024 - (128 - 8)*1024)
> -
> -void startup_32(struct boot_params *boot_params);
> -
> -static void
> -setup_memory_protection(unsigned long image_base, unsigned long image_size)
> -{
> -       /*
> -        * Allow execution of possible trampoline used
> -        * for switching between 4- and 5-level page tables
> -        * and relocated kernel image.
> -        */
> -
> -       efi_adjust_memory_range_protection(TRAMPOLINE_PLACEMENT_BASE,
> -                                          TRAMPOLINE_PLACEMENT_SIZE, 0);
> -
> -#ifdef CONFIG_64BIT
> -       if (image_base != (unsigned long)startup_32)
> -               efi_adjust_memory_range_protection(image_base, image_size, 0);
> -#else
> -       /*
> -        * Clear protection flags on a whole range of possible
> -        * addresses used for KASLR. We don't need to do that
> -        * on x86_64, since KASLR/extraction is performed after
> -        * dedicated identity page tables are built and we only
> -        * need to remove possible protection on relocated image
> -        * itself disregarding further relocations.
> -        */
> -       efi_adjust_memory_range_protection(LOAD_PHYSICAL_ADDR,
> -                                          KERNEL_IMAGE_SIZE - LOAD_PHYSICAL_ADDR,
> -                                          0);
> -#endif
> -}
> -
>  static const efi_char16_t apple[] = L"Apple";
>
> -static void setup_quirks(struct boot_params *boot_params,
> -                        unsigned long image_base,
> -                        unsigned long image_size)
> +static void setup_quirks(struct boot_params *boot_params)
>  {
>         efi_char16_t *fw_vendor = (efi_char16_t *)(unsigned long)
>                 efi_table_attr(efi_system_table, fw_vendor);
> @@ -269,9 +224,6 @@ static void setup_quirks(struct boot_params *boot_params,
>                 if (IS_ENABLED(CONFIG_APPLE_PROPERTIES))
>                         retrieve_apple_device_properties(boot_params);
>         }
> -
> -       if (IS_ENABLED(CONFIG_EFI_DXE_MEM_ATTRIBUTES))
> -               setup_memory_protection(image_base, image_size);
>  }
>
>  /*
> @@ -384,7 +336,7 @@ static void setup_graphics(struct boot_params *boot_params)
>  }
>
>
> -static void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
>  {
>         efi_bs_call(exit, handle, status, 0, NULL);
>         for(;;)
> @@ -707,8 +659,7 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle)
>  }
>
>  /*
> - * On success, we return the address of startup_32, which has potentially been
> - * relocated by efi_relocate_kernel.
> + * On success, we return extracted kernel entry point.
>   * On failure, we exit to the firmware via efi_exit instead of returning.
>   */
>  asmlinkage unsigned long efi_main(efi_handle_t handle,
> @@ -733,60 +684,6 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
>                 efi_dxe_table = NULL;
>         }
>
> -       /*
> -        * If the kernel isn't already loaded at a suitable address,
> -        * relocate it.
> -        *
> -        * It must be loaded above LOAD_PHYSICAL_ADDR.
> -        *
> -        * The maximum address for 64-bit is 1 << 46 for 4-level paging. This
> -        * is defined as the macro MAXMEM, but unfortunately that is not a
> -        * compile-time constant if 5-level paging is configured, so we instead
> -        * define our own macro for use here.
> -        *
> -        * For 32-bit, the maximum address is complicated to figure out, for
> -        * now use KERNEL_IMAGE_SIZE, which will be 512MiB, the same as what
> -        * KASLR uses.
> -        *
> -        * Also relocate it if image_offset is zero, i.e. the kernel wasn't
> -        * loaded by LoadImage, but rather by a bootloader that called the
> -        * handover entry. The reason we must always relocate in this case is
> -        * to handle the case of systemd-boot booting a unified kernel image,
> -        * which is a PE executable that contains the bzImage and an initrd as
> -        * COFF sections. The initrd section is placed after the bzImage
> -        * without ensuring that there are at least init_size bytes available
> -        * for the bzImage, and thus the compressed kernel's startup code may
> -        * overwrite the initrd unless it is moved out of the way.
> -        */
> -
> -       buffer_start = ALIGN(bzimage_addr - image_offset,
> -                            hdr->kernel_alignment);
> -       buffer_end = buffer_start + hdr->init_size;
> -
> -       if ((buffer_start < LOAD_PHYSICAL_ADDR)                              ||
> -           (IS_ENABLED(CONFIG_X86_32) && buffer_end > KERNEL_IMAGE_SIZE)    ||
> -           (IS_ENABLED(CONFIG_X86_64) && buffer_end > MAXMEM_X86_64_4LEVEL) ||
> -           (image_offset == 0)) {
> -               extern char _bss[];
> -
> -               status = efi_relocate_kernel(&bzimage_addr,
> -                                            (unsigned long)_bss - bzimage_addr,
> -                                            hdr->init_size,
> -                                            hdr->pref_address,
> -                                            hdr->kernel_alignment,
> -                                            LOAD_PHYSICAL_ADDR);
> -               if (status != EFI_SUCCESS) {
> -                       efi_err("efi_relocate_kernel() failed!\n");
> -                       goto fail;
> -               }
> -               /*
> -                * Now that we've copied the kernel elsewhere, we no longer
> -                * have a set up block before startup_32(), so reset image_offset
> -                * to zero in case it was set earlier.
> -                */
> -               image_offset = 0;
> -       }
> -
>  #ifdef CONFIG_CMDLINE_BOOL
>         status = efi_parse_options(CONFIG_CMDLINE);
>         if (status != EFI_SUCCESS) {
> @@ -843,7 +740,11 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
>
>         setup_efi_pci(boot_params);
>
> -       setup_quirks(boot_params, bzimage_addr, buffer_end - buffer_start);
> +       setup_quirks(boot_params);
> +
> +       bzimage_addr = extract_kernel_direct(handle, boot_params);
> +       if (!bzimage_addr)
> +               goto fail;
>
>         status = exit_boot(boot_params, handle);
>         if (status != EFI_SUCCESS) {
> diff --git a/drivers/firmware/efi/libstub/x86-stub.h b/drivers/firmware/efi/libstub/x86-stub.h
> new file mode 100644
> index 000000000000..baecc7c6e602
> --- /dev/null
> +++ b/drivers/firmware/efi/libstub/x86-stub.h
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _DRIVERS_FIRMWARE_EFI_X86STUB_H
> +#define _DRIVERS_FIRMWARE_EFI_X86STUB_H
> +
> +#include <linux/efi.h>
> +
> +#include <asm/bootparam.h>
> +
> +void __noreturn efi_exit(efi_handle_t handle, efi_status_t status);
> +unsigned long extract_kernel_direct(efi_handle_t handle, struct boot_params *boot_params);
> +void startup_32(struct boot_params *boot_params);
> +
> +#endif
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2022-12-15 12:38 ` [PATCH v4 20/26] x86/build: Make generated PE more spec compliant Evgeniy Baskov
@ 2023-03-10 15:17   ` Ard Biesheuvel
  2023-03-11 15:02     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 15:17 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Currently kernel image is not fully compliant PE image, so it may
> fail to boot with stricter implementations of UEFI PE loaders.
>
> Set minimal alignments and sizes specified by PE documentation [1]
> referenced by UEFI specification [2]. Align PE header to 8 bytes.
>
> Generate PE sections dynamically. This simplifies code, since with
> current implementation all of the sections needs to be defined in
> header.S, where most section header fields do not hold valid values,
> except for their names. Before the change, it also held flags,
> but now flags depend on kernel configuration and it is simpler
> to set them from build.c too.
>
> Setup sections protection. Since we cannot fit every needed section,
> set a part of protection flags dynamically during initialization.
> This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.
>
> [1] https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
> [2] https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf
>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

I would prefer it if we didn't rewrite the build tool this way.

Having the sections in header.S in the order they appear in the binary
is rather useful, and I don't think we should manipulate the section
flags based on whether CONFIG_DXE_MEM_ATTRIBUTES is set. I also don't
think we need more than .text / .,data (as discussed in the other
thread on linux-efi@)

Furthermore, I had a look at the audk PE loader [0], and I think it is
being overly pedantic.

The PE/COFF spec does not require that all sections are virtually
contiguous, and it does not require that the file content is
completely covered by either the header or by a section.

So what I would prefer to do is the following:

Sections:
Idx Name          Size     VMA              Type
  0 .reloc        00000200 0000000000002000 DATA
  1 .compat       00000200 0000000000003000 DATA
  2 .text         00bee000 0000000000004000 TEXT
  3 .data         00002200 0000000000bf2000 DATA

using 4k section alignment and 512 byte file alignment, and a header
size of 0x200 as before (This requires my patch that allows the setup
header to remain unmapped when running the stub [1])

The reloc and compat payloads are placed at the end of the setup data
as before, but increased in size to 512 bytes each, and then mapped
non-1:1 into the RVA space.

This works happily with both the existing PE loader as well as the
audk one, but with the pedantic flags disabled.



[0] https://github.com/acidanthera/audk
[1] https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?id=84412899c16c65af13dac305aa01a5a85e08c69e

> ---
>  arch/x86/boot/Makefile                  |   2 +-
>  arch/x86/boot/header.S                  |  96 +--------
>  arch/x86/boot/tools/build.c             | 270 +++++++++++++-----------
>  drivers/firmware/efi/libstub/x86-stub.c |   7 +-
>  4 files changed, 161 insertions(+), 214 deletions(-)
>
> diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
> index 9e38ffaadb5d..bed78c82238e 100644
> --- a/arch/x86/boot/Makefile
> +++ b/arch/x86/boot/Makefile
> @@ -91,7 +91,7 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
>
>  SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))
>
> -sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [a-zA-Z] \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|efi32_pe_entry\|input_data\|kernel_info\|_end\|_ehead\|_text\|z_.*\)$$/\#define ZO_\2 0x\1/p'
> +sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [a-zA-Z] \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|efi32_pe_entry\|input_data\|kernel_info\|_end\|_ehead\|_text\|_rodata\|z_.*\)$$/\#define ZO_\2 0x\1/p'
>
>  quiet_cmd_zoffset = ZOFFSET $@
>        cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
> diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
> index 9fec80bc504b..07e31ddb074f 100644
> --- a/arch/x86/boot/header.S
> +++ b/arch/x86/boot/header.S
> @@ -94,6 +94,7 @@ bugger_off_msg:
>         .set    bugger_off_msg_size, . - bugger_off_msg
>
>  #ifdef CONFIG_EFI_STUB
> +       .align 8
>  pe_header:
>         .long   PE_MAGIC
>
> @@ -107,7 +108,7 @@ coff_header:
>         .set    pe_opt_magic, PE_OPT_MAGIC_PE32PLUS
>         .word   IMAGE_FILE_MACHINE_AMD64
>  #endif
> -       .word   section_count                   # nr_sections
> +       .word   0                               # nr_sections
>         .long   0                               # TimeDateStamp
>         .long   0                               # PointerToSymbolTable
>         .long   1                               # NumberOfSymbols
> @@ -131,7 +132,7 @@ optional_header:
>         # Filled in by build.c
>         .long   0x0000                          # AddressOfEntryPoint
>
> -       .long   0x0200                          # BaseOfCode
> +       .long   0x1000                          # BaseOfCode
>  #ifdef CONFIG_X86_32
>         .long   0                               # data
>  #endif
> @@ -144,8 +145,8 @@ extra_header_fields:
>  #else
>         .quad   image_base                      # ImageBase
>  #endif
> -       .long   0x20                            # SectionAlignment
> -       .long   0x20                            # FileAlignment
> +       .long   0x1000                          # SectionAlignment
> +       .long   0x200                           # FileAlignment
>         .word   0                               # MajorOperatingSystemVersion
>         .word   0                               # MinorOperatingSystemVersion
>         .word   LINUX_EFISTUB_MAJOR_VERSION     # MajorImageVersion
> @@ -188,91 +189,14 @@ extra_header_fields:
>         .quad   0                               # CertificationTable
>         .quad   0                               # BaseRelocationTable
>
> -       # Section table
> -section_table:
> -       #
> -       # The offset & size fields are filled in by build.c.
> -       #
> -       .ascii  ".setup"
> -       .byte   0
> -       .byte   0
> -       .long   0
> -       .long   0x0                             # startup_{32,64}
> -       .long   0                               # Size of initialized data
> -                                               # on disk
> -       .long   0x0                             # startup_{32,64}
> -       .long   0                               # PointerToRelocations
> -       .long   0                               # PointerToLineNumbers
> -       .word   0                               # NumberOfRelocations
> -       .word   0                               # NumberOfLineNumbers
> -       .long   IMAGE_SCN_CNT_CODE              | \
> -               IMAGE_SCN_MEM_READ              | \
> -               IMAGE_SCN_MEM_EXECUTE           | \
> -               IMAGE_SCN_ALIGN_16BYTES         # Characteristics
> -
> -       #
> -       # The EFI application loader requires a relocation section
> -       # because EFI applications must be relocatable. The .reloc
> -       # offset & size fields are filled in by build.c.
>         #
> -       .ascii  ".reloc"
> -       .byte   0
> -       .byte   0
> -       .long   0
> -       .long   0
> -       .long   0                               # SizeOfRawData
> -       .long   0                               # PointerToRawData
> -       .long   0                               # PointerToRelocations
> -       .long   0                               # PointerToLineNumbers
> -       .word   0                               # NumberOfRelocations
> -       .word   0                               # NumberOfLineNumbers
> -       .long   IMAGE_SCN_CNT_INITIALIZED_DATA  | \
> -               IMAGE_SCN_MEM_READ              | \
> -               IMAGE_SCN_MEM_DISCARDABLE       | \
> -               IMAGE_SCN_ALIGN_1BYTES          # Characteristics
> -
> -#ifdef CONFIG_EFI_MIXED
> -       #
> -       # The offset & size fields are filled in by build.c.
> +       # Section table
> +       # It is generated by build.c and here we just need
> +       # to reserve some space for sections
>         #
> -       .asciz  ".compat"
> -       .long   0
> -       .long   0x0
> -       .long   0                               # Size of initialized data
> -                                               # on disk
> -       .long   0x0
> -       .long   0                               # PointerToRelocations
> -       .long   0                               # PointerToLineNumbers
> -       .word   0                               # NumberOfRelocations
> -       .word   0                               # NumberOfLineNumbers
> -       .long   IMAGE_SCN_CNT_INITIALIZED_DATA  | \
> -               IMAGE_SCN_MEM_READ              | \
> -               IMAGE_SCN_MEM_DISCARDABLE       | \
> -               IMAGE_SCN_ALIGN_1BYTES          # Characteristics
> -#endif
> +section_table:
> +       .fill 40*5, 1, 0
>
> -       #
> -       # The offset & size fields are filled in by build.c.
> -       #
> -       .ascii  ".text"
> -       .byte   0
> -       .byte   0
> -       .byte   0
> -       .long   0
> -       .long   0x0                             # startup_{32,64}
> -       .long   0                               # Size of initialized data
> -                                               # on disk
> -       .long   0x0                             # startup_{32,64}
> -       .long   0                               # PointerToRelocations
> -       .long   0                               # PointerToLineNumbers
> -       .word   0                               # NumberOfRelocations
> -       .word   0                               # NumberOfLineNumbers
> -       .long   IMAGE_SCN_CNT_CODE              | \
> -               IMAGE_SCN_MEM_READ              | \
> -               IMAGE_SCN_MEM_EXECUTE           | \
> -               IMAGE_SCN_ALIGN_16BYTES         # Characteristics
> -
> -       .set    section_count, (. - section_table) / 40
>  #endif /* CONFIG_EFI_STUB */
>
>         # Kernel attributes; used by setup.  This is part 1 of the
> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
> index fbc5315af032..ac6159b76a13 100644
> --- a/arch/x86/boot/tools/build.c
> +++ b/arch/x86/boot/tools/build.c
> @@ -61,8 +61,10 @@ uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
>
>  #ifdef CONFIG_EFI_MIXED
>  #define PECOFF_COMPAT_RESERVE 0x20
> +#define COMPAT_SECTION_SIZE 0x8
>  #else
>  #define PECOFF_COMPAT_RESERVE 0x0
> +#define COMPAT_SECTION_SIZE 0x0
>  #endif
>
>  #define RELOC_SECTION_SIZE 10
> @@ -117,6 +119,7 @@ static unsigned long efi_pe_entry;
>  static unsigned long efi32_pe_entry;
>  static unsigned long kernel_info;
>  static unsigned long startup_64;
> +static unsigned long _rodata;
>  static unsigned long _ehead;
>  static unsigned long _end;
>
> @@ -258,122 +261,177 @@ static void *map_output_file(const char *path, size_t size)
>
>  #ifdef CONFIG_EFI_STUB
>
> -static void update_pecoff_section_header_fields(char *section_name, uint32_t vma,
> -                                               uint32_t size, uint32_t datasz,
> -                                               uint32_t offset)
> +static unsigned int reloc_offset;
> +static unsigned int compat_offset;
> +
> +#define MAX_SECTIONS 5
> +
> +static void emit_pecoff_section(const char *section_name, unsigned int size,
> +                               unsigned int bss, unsigned int *file_offset,
> +                               unsigned int *mem_offset, uint32_t flags)
>  {
> -       unsigned int pe_header;
> +       unsigned int section_memsz, section_filesz;
> +       unsigned int name_len;
>         unsigned short num_sections;
> +       struct pe_hdr *hdr = get_pe_header(buf);
>         struct section_header *section;
>
> -       struct pe_hdr *hdr = get_pe_header(buf);
>         num_sections = get_unaligned_le16(&hdr->sections);
> -       section = get_sections(buf);
> +       if (num_sections >= MAX_SECTIONS)
> +               die("Not enough space to generate all sections");
>
> -       while (num_sections > 0) {
> -               if (strncmp(section->name, section_name, 8) == 0) {
> -                       /* section header size field */
> -                       put_unaligned_le32(size, &section->virtual_size);
> +       section = get_sections(buf) + num_sections;
>
> -                       /* section header vma field */
> -                       put_unaligned_le32(vma, &section->virtual_address);
> +       if ((size & (FILE_ALIGNMENT - 1)) || (bss & (FILE_ALIGNMENT - 1)))
> +               die("Section '%s' is improperly aligned", section_name);
>
> -                       /* section header 'size of initialised data' field */
> -                       put_unaligned_le32(datasz, &section->raw_data_size);
> +       section_memsz = round_up(size + bss, SECTION_ALIGNMENT);
> +       section_filesz = round_up(size, FILE_ALIGNMENT);
>
> -                       /* section header 'file offset' field */
> -                       put_unaligned_le32(offset, &section->data_addr);
> +       /* Zero out all section fields */
> +       memset(section, 0, sizeof(*section));
>
> -                       break;
> -               }
> -               section++;
> -               num_sections--;
> -       }
> -}
> +       name_len = strlen(section_name);
> +       if (name_len > sizeof(section->name))
> +               name_len = sizeof(section_name);
>
> -static void update_pecoff_section_header(char *section_name, uint32_t offset, uint32_t size)
> -{
> -       update_pecoff_section_header_fields(section_name, offset, size, size, offset);
> +       /* Section header size field */
> +       memcpy(section->name, section_name, name_len);
> +
> +       put_unaligned_le32(section_memsz, &section->virtual_size);
> +       put_unaligned_le32(*mem_offset, &section->virtual_address);
> +       put_unaligned_le32(section_filesz, &section->raw_data_size);
> +       put_unaligned_le32(*file_offset, &section->data_addr);
> +       put_unaligned_le32(flags, &section->flags);
> +
> +       put_unaligned_le16(num_sections + 1, &hdr->sections);
> +
> +       *mem_offset += section_memsz;
> +       *file_offset += section_filesz;
>  }
>
> -static void update_pecoff_setup_and_reloc(unsigned int size)
> +#define BASE_RVA 0x1000
> +
> +static unsigned int text_rva;
> +
> +static unsigned int update_pecoff_sections(unsigned int setup_size,
> +                                          unsigned int file_size,
> +                                          unsigned int virt_size,
> +                                          unsigned int text_size)
>  {
> -       uint32_t setup_offset = SECTOR_SIZE;
> -       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE - PECOFF_COMPAT_RESERVE;
> -#ifdef CONFIG_EFI_MIXED
> -       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
> -#endif
> -       uint32_t setup_size = reloc_offset - setup_offset;
> +       /* First section starts at 512 byes, after PE header */
> +       unsigned int mem_offset = BASE_RVA, file_offset = SECTOR_SIZE;
> +       unsigned int compat_size, reloc_size;
> +       unsigned int bss_size, text_rva_diff, reloc_rva;
> +       pe_opt_hdr  *opt_hdr = get_pe_opt_header(buf);
> +       struct pe_hdr *hdr = get_pe_header(buf);
> +       struct data_dirent *base_reloc;
> +
> +       if (get_unaligned_le32(&hdr->sections))
> +               die("Some sections present in PE file");
>
> -       update_pecoff_section_header(".setup", setup_offset, setup_size);
> -       update_pecoff_section_header(".reloc", reloc_offset, PECOFF_RELOC_RESERVE);
> +       reloc_size = round_up(RELOC_SECTION_SIZE, FILE_ALIGNMENT);
> +       compat_size = round_up(COMPAT_SECTION_SIZE, FILE_ALIGNMENT);
> +       virt_size = round_up(virt_size, SECTION_ALIGNMENT);
>
>         /*
> -        * Modify .reloc section contents with a single entry. The
> -        * relocation is applied to offset 10 of the relocation section.
> +        * Update sections offsets.
> +        * NOTE: Order is important
>          */
> -       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, &buf[reloc_offset]);
> -       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset + 4]);
>
> +       bss_size = virt_size - file_size;
> +
> +       emit_pecoff_section(".setup", setup_size - SECTOR_SIZE, 0,
> +                           &file_offset, &mem_offset, SCN_RO |
> +                           IMAGE_SCN_CNT_INITIALIZED_DATA);
> +
> +       text_rva_diff = mem_offset - file_offset;
> +       text_rva = mem_offset;
> +       emit_pecoff_section(".text", text_size, 0,
> +                           &file_offset, &mem_offset, SCN_RX |
> +                           IMAGE_SCN_CNT_CODE);
> +
> +       /* Check that kernel sections mapping is contiguous */
> +       if (text_rva_diff != mem_offset - file_offset)
> +               die("Kernel sections mapping is wrong: %#x != %#x",
> +                   mem_offset - file_offset, text_rva_diff);
> +
> +       emit_pecoff_section(".data", file_size - text_size, bss_size,
> +                           &file_offset, &mem_offset, SCN_RW |
> +                           IMAGE_SCN_CNT_INITIALIZED_DATA);
> +
> +       reloc_offset = file_offset;
> +       reloc_rva = mem_offset;
> +       emit_pecoff_section(".reloc", reloc_size, 0,
> +                           &file_offset, &mem_offset, SCN_RW |
> +                           IMAGE_SCN_CNT_INITIALIZED_DATA |
> +                           IMAGE_SCN_MEM_DISCARDABLE);
> +
> +       compat_offset = file_offset;
>  #ifdef CONFIG_EFI_MIXED
> -       update_pecoff_section_header(".compat", compat_offset, PECOFF_COMPAT_RESERVE);
> +       emit_pecoff_section(".comat", compat_size, 0,
> +                           &file_offset, &mem_offset, SCN_RW |
> +                           IMAGE_SCN_CNT_INITIALIZED_DATA |
> +                           IMAGE_SCN_MEM_DISCARDABLE);
> +#endif
>
> +       if (file_size + setup_size + reloc_size + compat_size != file_offset)
> +               die("file_size(%#x) != filesz(%#x)",
> +                   file_size + setup_size + reloc_size + compat_size, file_offset);
> +
> +       /* Size of code. */
> +       put_unaligned_le32(round_up(text_size, SECTION_ALIGNMENT), &opt_hdr->text_size);
>         /*
> -        * Put the IA-32 machine type (0x14c) and the associated entry point
> -        * address in the .compat section, so loaders can figure out which other
> -        * execution modes this image supports.
> +        * Size of data.
> +        * Exclude text size and first sector, which contains PE header.
>          */
> -       buf[compat_offset] = 0x1;
> -       buf[compat_offset + 1] = 0x8;
> -       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset + 2]);
> -       put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset + 4]);
> -#endif
> -}
> +       put_unaligned_le32(mem_offset - round_up(text_size, SECTION_ALIGNMENT),
> +                          &opt_hdr->data_size);
>
> -static unsigned int update_pecoff_sections(unsigned int text_start, unsigned int text_sz,
> -                              unsigned int init_sz)
> -{
> -       unsigned int file_sz = text_start + text_sz;
> -       unsigned int bss_sz = init_sz - file_sz;
> -       pe_opt_hdr *hdr = get_pe_opt_header(buf);
> +       /* Size of image. */
> +       put_unaligned_le32(mem_offset, &opt_hdr->image_size);
>
>         /*
> -        * The PE/COFF loader may load the image at an address which is
> -        * misaligned with respect to the kernel_alignment field in the setup
> -        * header.
> -        *
> -        * In order to avoid relocating the kernel to correct the misalignment,
> -        * add slack to allow the buffer to be aligned within the declared size
> -        * of the image.
> +        * Address of entry point for PE/COFF executable
>          */
> -       bss_sz  += CONFIG_PHYSICAL_ALIGN;
> -       init_sz += CONFIG_PHYSICAL_ALIGN;
> +       put_unaligned_le32(text_rva + efi_pe_entry, &opt_hdr->entry_point);
>
>         /*
> -        * Size of code: Subtract the size of the first sector (512 bytes)
> -        * which includes the header.
> +        * BaseOfCode for PE/COFF executable
>          */
> -       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz, &hdr->text_size);
> -
> -       /* Size of image */
> -       put_unaligned_le32(init_sz, &hdr->image_size);
> +       put_unaligned_le32(text_rva, &opt_hdr->code_base);
>
>         /*
> -        * Address of entry point for PE/COFF executable
> +        * Since we have generated .reloc section, we need to
> +        * fill-in Reloc directory
>          */
> -       put_unaligned_le32(text_start + efi_pe_entry, &hdr->entry_point);
> +       base_reloc = &get_data_dirs(buf)->base_relocations;
> +       put_unaligned_le32(reloc_rva, &base_reloc->virtual_address);
> +       put_unaligned_le32(RELOC_SECTION_SIZE, &base_reloc->size);
>
> -       update_pecoff_section_header_fields(".text", text_start, text_sz + bss_sz,
> -                                           text_sz, text_start);
> -
> -       return text_start + file_sz;
> +       return file_offset;
>  }
>
> -static int reserve_pecoff_reloc_section(int c)
> +static void generate_pecoff_section_data(uint8_t *output)
>  {
> -       /* Reserve 0x20 bytes for .reloc section */
> -       memset(buf+c, 0, PECOFF_RELOC_RESERVE);
> -       return PECOFF_RELOC_RESERVE;
> +       /*
> +        * Modify .reloc section contents with a two entries. The
> +        * relocation is applied to offset 10 of the relocation section.
> +        */
> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, &output[reloc_offset]);
> +       put_unaligned_le32(RELOC_SECTION_SIZE, &output[reloc_offset + 4]);
> +
> +#ifdef CONFIG_EFI_MIXED
> +       /*
> +        * Put the IA-32 machine type (0x14c) and the associated entry point
> +        * address in the .compat section, so loaders can figure out which other
> +        * execution modes this image supports.
> +        */
> +       output[compat_offset] = 0x1;
> +       output[compat_offset + 1] = 0x8;
> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &output[compat_offset + 2]);
> +       put_unaligned_le32(efi32_pe_entry + text_rva, &output[compat_offset + 4]);
> +#endif
>  }
>
>  static void efi_stub_update_defaults(void)
> @@ -407,26 +465,10 @@ static void efi_stub_entry_update(void)
>
>  #else
>
> -static inline void update_pecoff_setup_and_reloc(unsigned int size) {}
> -static inline void update_pecoff_text(unsigned int text_start,
> -                                     unsigned int file_sz,
> -                                     unsigned int init_sz) {}
> -static inline void efi_stub_update_defaults(void) {}
> -static inline void efi_stub_entry_update(void) {}
> +static void efi_stub_update_defaults(void) {}
>
> -static inline int reserve_pecoff_reloc_section(int c)
> -{
> -       return 0;
> -}
>  #endif /* CONFIG_EFI_STUB */
>
> -static int reserve_pecoff_compat_section(int c)
> -{
> -       /* Reserve 0x20 bytes for .compat section */
> -       memset(buf+c, 0, PECOFF_COMPAT_RESERVE);
> -       return PECOFF_COMPAT_RESERVE;
> -}
> -
>  /*
>   * Parse zoffset.h and find the entry points. We could just #include zoffset.h
>   * but that would mean tools/build would have to be rebuilt every time. It's
> @@ -456,6 +498,7 @@ static void parse_zoffset(char *fname)
>                 PARSE_ZOFS(p, efi32_pe_entry);
>                 PARSE_ZOFS(p, kernel_info);
>                 PARSE_ZOFS(p, startup_64);
> +               PARSE_ZOFS(p, _rodata);
>                 PARSE_ZOFS(p, _ehead);
>                 PARSE_ZOFS(p, _end);
>
> @@ -489,10 +532,6 @@ static unsigned int read_setup(char *path)
>
>         fclose(file);
>
> -       /* Reserve space for PE sections */
> -       file_size += reserve_pecoff_compat_section(file_size);
> -       file_size += reserve_pecoff_reloc_section(file_size);
> -
>         /* Pad unused space with zeros */
>
>         setup_size = round_up(file_size, SECTOR_SIZE);
> @@ -515,7 +554,6 @@ int main(int argc, char **argv)
>         size_t kern_file_size;
>         unsigned int setup_size;
>         unsigned int setup_sectors;
> -       unsigned int init_size;
>         unsigned int total_size;
>         unsigned int kern_size;
>         void *kernel;
> @@ -540,8 +578,7 @@ int main(int argc, char **argv)
>
>  #ifdef CONFIG_EFI_STUB
>         /* PE specification require 512-byte minimum section file alignment */
> -       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
> -       update_pecoff_setup_and_reloc(setup_size);
> +       kern_size = round_up(kern_file_size + 4, FILE_ALIGNMENT);
>  #else
>         /* Number of 16-byte paragraphs, including space for a 4-byte CRC */
>         kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
> @@ -554,33 +591,12 @@ int main(int argc, char **argv)
>         /* Update kernel_info offset. */
>         put_unaligned_le32(kernel_info, &buf[0x268]);
>
> -       init_size = get_unaligned_le32(&buf[0x260]);
> -
>  #ifdef CONFIG_EFI_STUB
> -       /*
> -        * The decompression buffer will start at ImageBase. When relocating
> -        * the compressed kernel to its end, we must ensure that the head
> -        * section does not get overwritten.  The head section occupies
> -        * [i, i + _ehead), and the destination is [init_sz - _end, init_sz).
> -        *
> -        * At present these should never overlap, because 'i' is at most 32k
> -        * because of SETUP_SECT_MAX, '_ehead' is less than 1k, and the
> -        * calculation of INIT_SIZE in boot/header.S ensures that
> -        * 'init_sz - _end' is at least 64k.
> -        *
> -        * For future-proofing, increase init_sz if necessary.
> -        */
> -
> -       if (init_size - _end < setup_size + _ehead) {
> -               init_size = round_up(setup_size + _ehead + _end, SECTION_ALIGNMENT);
> -               put_unaligned_le32(init_size, &buf[0x260]);
> -       }
>
> -       total_size = update_pecoff_sections(setup_size, kern_size, init_size);
> +       total_size = update_pecoff_sections(setup_size, kern_size, _end, _rodata);
>
>         efi_stub_entry_update();
>  #else
> -       (void)init_size;
>         total_size = setup_size + kern_size;
>  #endif
>
> @@ -590,6 +606,10 @@ int main(int argc, char **argv)
>         memcpy(output + setup_size, kernel, kern_file_size);
>         memset(output + setup_size + kern_file_size, 0, kern_size - kern_file_size);
>
> +#ifdef CONFIG_EFI_STUB
> +       generate_pecoff_section_data(output);
> +#endif
> +
>         /* Calculate and write kernel checksum. */
>         crc = partial_crc32(output, total_size - 4, crc);
>         put_unaligned_le32(crc, &output[total_size - 4]);
> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
> index 1d1ab1911fd3..1f0a2e7075c3 100644
> --- a/drivers/firmware/efi/libstub/x86-stub.c
> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> @@ -389,8 +389,11 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t handle,
>
>         hdr = &boot_params->hdr;
>
> -       /* Copy the setup header from the second sector to boot_params */
> -       memcpy(&hdr->jump, image_base + 512,
> +       /*
> +        * Copy the setup header from the second sector
> +        * (mapped to image_base + 0x1000) to boot_params
> +        */
> +       memcpy(&hdr->jump, image_base + 0x1000,
>                sizeof(struct setup_header) - offsetof(struct setup_header, jump));
>
>         /*
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes
  2022-12-15 12:38 ` [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes Evgeniy Baskov
@ 2023-03-10 15:20   ` Ard Biesheuvel
  2023-03-11 15:09     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 15:20 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Explicitly change sections memory attributes in efi_pe_entry in case
> of incorrect EFI implementations and to reduce access rights to
> compressed kernel blob. By default it is set executable due to
> restriction in maximum number of sections that can fit before zero
> page.
>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

I don't think we need this patch. Firmware that cares about W^X will
map the PE image with R-X for text/rodata and RW- for data/bss, which
is sufficient, and firmware that doesn't is a lost cause anyway.


> ---
>  drivers/firmware/efi/libstub/x86-stub.c | 54 +++++++++++++++++++++++++
>  1 file changed, 54 insertions(+)
>
> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
> index 1f0a2e7075c3..60697fcd8950 100644
> --- a/drivers/firmware/efi/libstub/x86-stub.c
> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> @@ -27,6 +27,12 @@ const efi_dxe_services_table_t *efi_dxe_table;
>  u32 image_offset __section(".data");
>  static efi_loaded_image_t *image __section(".data");
>
> +extern char _head[], _ehead[];
> +extern char _compressed[], _ecompressed[];
> +extern char _text[], _etext[];
> +extern char _rodata[], _erodata[];
> +extern char _data[];
> +
>  static efi_status_t
>  preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct pci_setup_rom **__rom)
>  {
> @@ -343,6 +349,52 @@ void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
>                 asm("hlt");
>  }
>
> +
> +/*
> + * Manually setup memory protection attributes for each ELF section
> + * since we cannot do it properly by using PE sections.
> + */
> +static void setup_sections_memory_protection(unsigned long image_base)
> +{
> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
> +       efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
> +
> +       if (!efi_dxe_table ||
> +           efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
> +               efi_warn("Unable to locate EFI DXE services table\n");
> +               efi_dxe_table = NULL;
> +               return;
> +       }
> +
> +       /* .setup [image_base, _head] */
> +       efi_adjust_memory_range_protection(image_base,
> +                                          (unsigned long)_head - image_base,
> +                                          EFI_MEMORY_RO | EFI_MEMORY_XP);
> +       /* .head.text [_head, _ehead] */
> +       efi_adjust_memory_range_protection((unsigned long)_head,
> +                                          (unsigned long)_ehead - (unsigned long)_head,
> +                                          EFI_MEMORY_RO);
> +       /* .rodata..compressed [_compressed, _ecompressed] */
> +       efi_adjust_memory_range_protection((unsigned long)_compressed,
> +                                          (unsigned long)_ecompressed - (unsigned long)_compressed,
> +                                          EFI_MEMORY_RO | EFI_MEMORY_XP);
> +       /* .text [_text, _etext] */
> +       efi_adjust_memory_range_protection((unsigned long)_text,
> +                                          (unsigned long)_etext - (unsigned long)_text,
> +                                          EFI_MEMORY_RO);
> +       /* .rodata [_rodata, _erodata] */
> +       efi_adjust_memory_range_protection((unsigned long)_rodata,
> +                                          (unsigned long)_erodata - (unsigned long)_rodata,
> +                                          EFI_MEMORY_RO | EFI_MEMORY_XP);
> +       /* .data, .bss [_data, _end] */
> +       efi_adjust_memory_range_protection((unsigned long)_data,
> +                                          (unsigned long)_end - (unsigned long)_data,
> +                                          EFI_MEMORY_XP);
> +#else
> +       (void)image_base;
> +#endif
> +}
> +
>  void __noreturn efi_stub_entry(efi_handle_t handle,
>                                efi_system_table_t *sys_table_arg,
>                                struct boot_params *boot_params);
> @@ -687,6 +739,8 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
>                 efi_dxe_table = NULL;
>         }
>
> +       setup_sections_memory_protection(bzimage_addr - image_offset);
> +
>  #ifdef CONFIG_CMDLINE_BOOL
>         status = efi_parse_options(CONFIG_CMDLINE);
>         if (status != EFI_SUCCESS) {
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 23/26] efi/libstub: Use memory attribute protocol
  2022-12-15 12:38 ` [PATCH v4 23/26] efi/libstub: Use memory attribute protocol Evgeniy Baskov
@ 2023-03-10 16:13   ` Ard Biesheuvel
  2023-03-11 15:14     ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-10 16:13 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> Add EFI_MEMORY_ATTRIBUTE_PROTOCOL as preferred alternative to DXE
> services for changing memory attributes in the EFISTUB.
>
> Use DXE services only as a fallback in case aforementioned protocol
> is not supported by UEFI implementation.
>
> Move DXE services initialization code closer to the place they are used
> to match EFI_MEMORY_ATTRIBUTE_PROTOCOL initialization code.
>
> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> Tested-by: Peter Jones <pjones@redhat.com>
> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>

I'm not convinced about the use of the DXE services for this, and I
think we should replace this patch with changes that base all the new
protection code on the EFI memory attributes protocol only.

We introduced that DXE code to remove protections from memory that was
mapped read-only and/or non-executable, and described as such in the
GCD memory map.

Using it to manipulate restricted permissions like this is quite a
different thing, and sadly (at least in EDK2), the GCD system memory
map is not kept in sync with the updated permissions, i.e, the W^X
protections for loaded images and the NX protection for arbitrary page
allocations are both based on the PI CPU arch protocol, which
manipulates the page tables directly, but does not record the modified
attributes in the GCD or EFI memory maps, as this would result in
massive fragmentation and break lots of other things.

That means that, except for the specific use case for which we
introduced the DXE services calls, the only reliable way to figure out
what permission attributes a certain range of memory is using is the
EFI memory attributes protocol, and I don't think we should use
anything else for tightening down these protections.




> ---
>  drivers/firmware/efi/libstub/mem.c      | 168 ++++++++++++++++++------
>  drivers/firmware/efi/libstub/x86-stub.c |  17 ---
>  2 files changed, 128 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/firmware/efi/libstub/mem.c b/drivers/firmware/efi/libstub/mem.c
> index 3e47e5931f04..07d54c88c62e 100644
> --- a/drivers/firmware/efi/libstub/mem.c
> +++ b/drivers/firmware/efi/libstub/mem.c
> @@ -5,6 +5,9 @@
>
>  #include "efistub.h"
>
> +const efi_dxe_services_table_t *efi_dxe_table;
> +efi_memory_attribute_protocol_t *efi_mem_attrib_proto;
> +
>  /**
>   * efi_get_memory_map() - get memory map
>   * @map:               pointer to memory map pointer to which to assign the
> @@ -129,66 +132,47 @@ void efi_free(unsigned long size, unsigned long addr)
>         efi_bs_call(free_pages, addr, nr_pages);
>  }
>
> -/**
> - * efi_adjust_memory_range_protection() - change memory range protection attributes
> - * @start:     memory range start address
> - * @size:      memory range size
> - *
> - * Actual memory range for which memory attributes are modified is
> - * the smallest ranged with start address and size aligned to EFI_PAGE_SIZE
> - * that includes [start, start + size].
> - *
> - * @return: status code
> - */
> -efi_status_t efi_adjust_memory_range_protection(unsigned long start,
> -                                               unsigned long size,
> -                                               unsigned long attributes)
> +static void retrieve_dxe_table(void)
> +{
> +       efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
> +       if (efi_dxe_table &&
> +           efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
> +               efi_warn("Ignoring DXE services table: invalid signature\n");
> +               efi_dxe_table = NULL;
> +       }
> +}
> +
> +static efi_status_t adjust_mem_attrib_dxe(efi_physical_addr_t rounded_start,
> +                                         efi_physical_addr_t rounded_end,
> +                                         unsigned long attributes)
>  {
>         efi_status_t status;
>         efi_gcd_memory_space_desc_t desc;
> -       efi_physical_addr_t end, next;
> -       efi_physical_addr_t rounded_start, rounded_end;
> +       efi_physical_addr_t end, next, start;
>         efi_physical_addr_t unprotect_start, unprotect_size;
>
> -       if (efi_dxe_table == NULL)
> -               return EFI_UNSUPPORTED;
> +       if (!efi_dxe_table) {
> +               retrieve_dxe_table();
>
> -       /*
> -        * This function should not be used to modify attributes
> -        * other than writable/executable.
> -        */
> -
> -       if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
> -               return EFI_INVALID_PARAMETER;
> -
> -       /*
> -        * Disallow simultaniously executable and writable memory
> -        * to inforce W^X policy if direct extraction code is enabled.
> -        */
> -
> -       if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
> -               efi_warn("W^X violation at [%08lx,%08lx]\n",
> -                        (unsigned long)rounded_start,
> -                        (unsigned long)rounded_end);
> +               if (!efi_dxe_table)
> +                       return EFI_UNSUPPORTED;
>         }
>
> -       rounded_start = rounddown(start, EFI_PAGE_SIZE);
> -       rounded_end = roundup(start + size, EFI_PAGE_SIZE);
> -
>         /*
>          * Don't modify memory region attributes, they are
>          * already suitable, to lower the possibility to
>          * encounter firmware bugs.
>          */
>
> -       for (end = start + size; start < end; start = next) {
> +
> +       for (start = rounded_start, end = rounded_end; start < end; start = next) {
>
>                 status = efi_dxe_call(get_memory_space_descriptor,
>                                       start, &desc);
>
>                 if (status != EFI_SUCCESS) {
>                         efi_warn("Unable to get memory descriptor at %lx\n",
> -                                start);
> +                                (unsigned long)start);
>                         return status;
>                 }
>
> @@ -230,3 +214,107 @@ efi_status_t efi_adjust_memory_range_protection(unsigned long start,
>
>         return EFI_SUCCESS;
>  }
> +
> +static void retrieve_memory_attributes_proto(void)
> +{
> +       efi_status_t status;
> +       efi_guid_t guid = EFI_MEMORY_ATTRIBUTE_PROTOCOL_GUID;
> +
> +       status = efi_bs_call(locate_protocol, &guid, NULL,
> +                            (void **)&efi_mem_attrib_proto);
> +       if (status != EFI_SUCCESS)
> +               efi_mem_attrib_proto = NULL;
> +}
> +
> +/**
> + * efi_adjust_memory_range_protection() - change memory range protection attributes
> + * @start:     memory range start address
> + * @size:      memory range size
> + *
> + * Actual memory range for which memory attributes are modified is
> + * the smallest ranged with start address and size aligned to EFI_PAGE_SIZE
> + * that includes [start, start + size].
> + *
> + * This function first attempts to use EFI_MEMORY_ATTRIBUTE_PROTOCOL,
> + * that is a part of UEFI Specification since version 2.10.
> + * If the protocol is unavailable it falls back to DXE services functions.
> + *
> + * @return: status code
> + */
> +efi_status_t efi_adjust_memory_range_protection(unsigned long start,
> +                                               unsigned long size,
> +                                               unsigned long attributes)
> +{
> +       efi_status_t status;
> +       efi_physical_addr_t rounded_start, rounded_end;
> +       unsigned long attr_clear;
> +
> +       /*
> +        * This function should not be used to modify attributes
> +        * other than writable/executable.
> +        */
> +
> +       if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
> +               return EFI_INVALID_PARAMETER;
> +
> +       /*
> +        * Warn if requested to make memory simultaneously
> +        * executable and writable to enforce W^X policy.
> +        */
> +
> +       if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
> +               efi_warn("W^X violation at  [%08lx,%08lx]",
> +                        (unsigned long)rounded_start,
> +                        (unsigned long)rounded_end);
> +       }
> +
> +       rounded_start = rounddown(start, EFI_PAGE_SIZE);
> +       rounded_end = roundup(start + size, EFI_PAGE_SIZE);
> +
> +       if (!efi_mem_attrib_proto) {
> +               retrieve_memory_attributes_proto();
> +
> +               /* Fall back to DXE services if unsupported */
> +               if (!efi_mem_attrib_proto) {
> +                       return adjust_mem_attrib_dxe(rounded_start,
> +                                                    rounded_end,
> +                                                    attributes);
> +               }
> +       }
> +
> +       /*
> +        * Unlike DXE services functions, EFI_MEMORY_ATTRIBUTE_PROTOCOL
> +        * does not clear unset protection bit, so it needs to be cleared
> +        * explcitly
> +        */
> +
> +       attr_clear = ~attributes &
> +                    (EFI_MEMORY_RO | EFI_MEMORY_XP | EFI_MEMORY_RP);
> +
> +       status = efi_call_proto(efi_mem_attrib_proto,
> +                               clear_memory_attributes,
> +                               rounded_start,
> +                               rounded_end - rounded_start,
> +                               attr_clear);
> +       if (status != EFI_SUCCESS) {
> +               efi_warn("Failed to clear memory attributes at [%08lx,%08lx]: %lx",
> +                        (unsigned long)rounded_start,
> +                        (unsigned long)rounded_end,
> +                        status);
> +               return status;
> +       }
> +
> +       status = efi_call_proto(efi_mem_attrib_proto,
> +                               set_memory_attributes,
> +                               rounded_start,
> +                               rounded_end - rounded_start,
> +                               attributes);
> +       if (status != EFI_SUCCESS) {
> +               efi_warn("Failed to set memory attributes at [%08lx,%08lx]: %lx",
> +                        (unsigned long)rounded_start,
> +                        (unsigned long)rounded_end,
> +                        status);
> +       }
> +
> +       return status;
> +}
> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
> index 60697fcd8950..06a62b121521 100644
> --- a/drivers/firmware/efi/libstub/x86-stub.c
> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> @@ -23,7 +23,6 @@
>  #define MAXMEM_X86_64_4LEVEL (1ull << 46)
>
>  const efi_system_table_t *efi_system_table;
> -const efi_dxe_services_table_t *efi_dxe_table;
>  u32 image_offset __section(".data");
>  static efi_loaded_image_t *image __section(".data");
>
> @@ -357,15 +356,6 @@ void __noreturn efi_exit(efi_handle_t handle, efi_status_t status)
>  static void setup_sections_memory_protection(unsigned long image_base)
>  {
>  #ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
> -       efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
> -
> -       if (!efi_dxe_table ||
> -           efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
> -               efi_warn("Unable to locate EFI DXE services table\n");
> -               efi_dxe_table = NULL;
> -               return;
> -       }
> -
>         /* .setup [image_base, _head] */
>         efi_adjust_memory_range_protection(image_base,
>                                            (unsigned long)_head - image_base,
> @@ -732,13 +722,6 @@ asmlinkage unsigned long efi_main(efi_handle_t handle,
>         if (efi_system_table->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE)
>                 efi_exit(handle, EFI_INVALID_PARAMETER);
>
> -       efi_dxe_table = get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
> -       if (efi_dxe_table &&
> -           efi_dxe_table->hdr.signature != EFI_DXE_SERVICES_TABLE_SIGNATURE) {
> -               efi_warn("Ignoring DXE services table: invalid signature\n");
> -               efi_dxe_table = NULL;
> -       }
> -
>         setup_sections_memory_protection(bzimage_addr - image_offset);
>
>  #ifdef CONFIG_CMDLINE_BOOL
> --
> 2.37.4
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size
  2023-03-10 14:43   ` Ard Biesheuvel
@ 2023-03-11 14:30     ` Evgeniy Baskov
  2023-03-11 14:42       ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 14:30 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 17:43, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> To protect sections on page table level each section
>> needs to be aligned on page size (4KB).
>> 
>> Set sections alignment in linker script.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> ---
>>  arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++++
>>  1 file changed, 6 insertions(+)
>> 
>> diff --git a/arch/x86/boot/compressed/vmlinux.lds.S 
>> b/arch/x86/boot/compressed/vmlinux.lds.S
>> index 112b2375d021..6be90f1a1198 100644
>> --- a/arch/x86/boot/compressed/vmlinux.lds.S
>> +++ b/arch/x86/boot/compressed/vmlinux.lds.S
>> @@ -27,21 +27,27 @@ SECTIONS
>>                 HEAD_TEXT
>>                 _ehead = . ;
>>         }
>> +       . = ALIGN(PAGE_SIZE);
>>         .rodata..compressed : {
>> +               _compressed = .;
>>                 *(.rodata..compressed)
> 
> Can you just move this bit into the rodata section below?

I don't think that easily possible, as the layout need
to stay compatible with in-place extraction for non-UEFI boot.
For that execution path the code in .head.text moves everything
behind it to the end of the extraction buffer and extraction
code overwrites compressed kernel blob progressively during
extraction. And that is why we have effectively have two code
sections...

> 
>> +               _ecompressed = .;
>>         }
>> +       . = ALIGN(PAGE_SIZE);
>>         .text : {
> 
> Please use
> 
> .text : ALIGN(PAGE_SIZE) {
> 
> which marks the section as being page aligned, rather than just being
> placed on a 4k boundary.

Will fix in v5.

> 
>>                 _text = .;      /* Text */
>>                 *(.text)
>>                 *(.text.*)
>>                 _etext = . ;
>>         }
>> +       . = ALIGN(PAGE_SIZE);
>>         .rodata : {
>>                 _rodata = . ;
>>                 *(.rodata)       /* read-only data */
>>                 *(.rodata.*)
>>                 _erodata = . ;
>>         }
>> +       . = ALIGN(PAGE_SIZE);
>>         .data : {
>>                 _data = . ;
>>                 *(.data)
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB
  2023-03-10 14:45   ` Ard Biesheuvel
@ 2023-03-11 14:31     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 14:31 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 17:45, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Avoid creating sections simultaneously writable and readable
>> to prepare for W^X implementation. Align sections on page size (4KB) 
>> to
>> allow protecting them in the page tables.
>> 
>> Split init code form ".init" segment into separate R_X ".inittext"
>> segment and make ".init" segment non-executable.
>> 
>> Also add these segments to x86_32 architecture for consistency.
>> Currently paging is disabled in x86_32 in compressed kernel, so
>> protection is not applied anyways, but .init code was incorrectly
>> placed in non-executable ".data" segment. This should not change
>> anything meaningful in memory layout now, but might be required in 
>> case
>> memory protection will also be implemented in compressed kernel for
>> x86_32.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
> 
> One nit: the series modifies both the decompressor linker script and
> the core kernel one, so please make it very explicit in the commit log
> which one is being modified, and why it matters for this particular
> context.
> 

Thanks! I'll amend the log.
> 
>> ---
>>  arch/x86/kernel/vmlinux.lds.S | 15 ++++++++-------
>>  1 file changed, 8 insertions(+), 7 deletions(-)
>> 
>> diff --git a/arch/x86/kernel/vmlinux.lds.S 
>> b/arch/x86/kernel/vmlinux.lds.S
>> index 2e0ee14229bf..2e56d694c491 100644
>> --- a/arch/x86/kernel/vmlinux.lds.S
>> +++ b/arch/x86/kernel/vmlinux.lds.S
>> @@ -102,12 +102,11 @@ jiffies = jiffies_64;
>>  PHDRS {
>>         text PT_LOAD FLAGS(5);          /* R_E */
>>         data PT_LOAD FLAGS(6);          /* RW_ */
>> -#ifdef CONFIG_X86_64
>> -#ifdef CONFIG_SMP
>> +#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
>>         percpu PT_LOAD FLAGS(6);        /* RW_ */
>>  #endif
>> -       init PT_LOAD FLAGS(7);          /* RWE */
>> -#endif
>> +       inittext PT_LOAD FLAGS(5);      /* R_E */
>> +       init PT_LOAD FLAGS(6);          /* RW_ */
>>         note PT_NOTE FLAGS(0);          /* ___ */
>>  }
>> 
>> @@ -227,9 +226,10 @@ SECTIONS
>>  #endif
>> 
>>         INIT_TEXT_SECTION(PAGE_SIZE)
>> -#ifdef CONFIG_X86_64
>> -       :init
>> -#endif
>> +       :inittext
>> +
>> +       . = ALIGN(PAGE_SIZE);
>> +
>> 
>>         /*
>>          * Section for code used exclusively before alternatives are 
>> run. All
>> @@ -241,6 +241,7 @@ SECTIONS
>>         .altinstr_aux : AT(ADDR(.altinstr_aux) - LOAD_OFFSET) {
>>                 *(.altinstr_aux)
>>         }
>> +       :init
>> 
>>         INIT_DATA_SECTION(16)
>> 
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer
  2023-03-10 14:52   ` Ard Biesheuvel
@ 2023-03-11 14:34     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 14:34 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 17:52, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Convert kernel_add_identity_map() into a function pointer to be able
>> to provide alternative implementations of this function. Required
>> to enable calling the code using this function from EFI environment.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> ---
>>  arch/x86/boot/compressed/ident_map_64.c |  7 ++++---
>>  arch/x86/boot/compressed/misc.c         | 24 ++++++++++++++++++++++++
>>  arch/x86/boot/compressed/misc.h         | 15 +++------------
>>  3 files changed, 31 insertions(+), 15 deletions(-)
>> 
>> diff --git a/arch/x86/boot/compressed/ident_map_64.c 
>> b/arch/x86/boot/compressed/ident_map_64.c
>> index ba5108c58a4e..1aee524d3c2b 100644
>> --- a/arch/x86/boot/compressed/ident_map_64.c
>> +++ b/arch/x86/boot/compressed/ident_map_64.c
>> @@ -92,9 +92,9 @@ bool has_nx; /* set in head_64.S */
>>  /*
>>   * Adds the specified range to the identity mappings.
>>   */
>> -unsigned long kernel_add_identity_map(unsigned long start,
>> -                                     unsigned long end,
>> -                                     unsigned int flags)
>> +unsigned long kernel_add_identity_map_(unsigned long start,
> 
> Please use a more discriminating name here - the trailing _ is rather
> hard to spot.

Got it. The kernel_add_identity_map_impl() will fit better, I think.

> 
>> +                                      unsigned long end,
>> +                                      unsigned int flags)
>>  {
>>         int ret;
>> 
>> @@ -142,6 +142,7 @@ void initialize_identity_maps(void *rmode)
>>         struct setup_data *sd;
>> 
>>         boot_params = rmode;
>> +       kernel_add_identity_map = kernel_add_identity_map_;
>> 
>>         /* Exclude the encryption mask from __PHYSICAL_MASK */
>>         physical_mask &= ~sme_me_mask;
>> diff --git a/arch/x86/boot/compressed/misc.c 
>> b/arch/x86/boot/compressed/misc.c
>> index aa4a22bc9cf9..c9c235d65d16 100644
>> --- a/arch/x86/boot/compressed/misc.c
>> +++ b/arch/x86/boot/compressed/misc.c
>> @@ -275,6 +275,22 @@ static void parse_elf(void *output, unsigned long 
>> output_len,
>>         free(phdrs);
>>  }
>> 
>> +/*
>> + * This points to actual implementation of mapping function
>> + * for current environment: either EFI API wrapper,
>> + * own implementation or dummy implementation below.
>> + */
>> +unsigned long (*kernel_add_identity_map)(unsigned long start,
>> +                                        unsigned long end,
>> +                                        unsigned int flags);
>> +
>> +static inline unsigned long kernel_add_identity_map_dummy(unsigned 
>> long start,
> 
> This function is never called, it only has its address taken, so the
> 'inline' makes no sense here.
> 

Indeed. I'll remove the inline.

>> +                                                         unsigned 
>> long end,
>> +                                                         unsigned int 
>> flags)
>> +{
>> +       return start;
>> +}
>> +
>>  /*
>>   * The compressed kernel image (ZO), has been moved so that its 
>> position
>>   * is against the end of the buffer used to hold the uncompressed 
>> kernel
>> @@ -312,6 +328,14 @@ asmlinkage __visible void *extract_kernel(void 
>> *rmode, memptr heap,
>> 
>>         init_default_io_ops();
>> 
>> +       /*
>> +        * On 64-bit this pointer is set during page table 
>> uninitialization,
> 
> initialization

Thanks!

> 
>> +        * but on 32-bit it remains uninitialized, since paging is 
>> disabled.
>> +        */
>> +       if (IS_ENABLED(CONFIG_X86_32))
>> +               kernel_add_identity_map = 
>> kernel_add_identity_map_dummy;
>> +
>> +
>>         /*
>>          * Detect TDX guest environment.
>>          *
>> diff --git a/arch/x86/boot/compressed/misc.h 
>> b/arch/x86/boot/compressed/misc.h
>> index 38d31bec062d..0076b2845b4b 100644
>> --- a/arch/x86/boot/compressed/misc.h
>> +++ b/arch/x86/boot/compressed/misc.h
>> @@ -180,18 +180,9 @@ static inline int 
>> count_immovable_mem_regions(void) { return 0; }
>>  #ifdef CONFIG_X86_5LEVEL
>>  extern unsigned int __pgtable_l5_enabled, pgdir_shift, ptrs_per_p4d;
>>  #endif
>> -#ifdef CONFIG_X86_64
>> -extern unsigned long kernel_add_identity_map(unsigned long start,
>> -                                            unsigned long end,
>> -                                            unsigned int flags);
>> -#else
>> -static inline unsigned long kernel_add_identity_map(unsigned long 
>> start,
>> -                                                   unsigned long end,
>> -                                                   unsigned int 
>> flags)
>> -{
>> -       return start;
>> -}
>> -#endif
>> +extern unsigned long (*kernel_add_identity_map)(unsigned long start,
>> +                                               unsigned long end,
>> +                                               unsigned int flags);
>>  /* Used by PAGE_KERN* macros: */
>>  extern pteval_t __default_kernel_pte_mask;
>> 
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 13/26] x86/boot: Split trampoline and pt init code
  2023-03-10 14:56   ` Ard Biesheuvel
@ 2023-03-11 14:37     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 14:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 17:56, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> When allocating trampoline from libstub trampoline allocation is
>> performed separately, so it needs to be skipped.
>> 
>> Split trampoline initialization and allocation code into two
>> functions to make them invokable separately.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> ---
>>  arch/x86/boot/compressed/pgtable_64.c | 73 
>> +++++++++++++++++----------
>>  1 file changed, 46 insertions(+), 27 deletions(-)
>> 
>> diff --git a/arch/x86/boot/compressed/pgtable_64.c 
>> b/arch/x86/boot/compressed/pgtable_64.c
>> index c7cf5a1059a8..1f7169248612 100644
>> --- a/arch/x86/boot/compressed/pgtable_64.c
>> +++ b/arch/x86/boot/compressed/pgtable_64.c
>> @@ -106,12 +106,8 @@ static unsigned long 
>> find_trampoline_placement(void)
>>         return bios_start - TRAMPOLINE_32BIT_SIZE;
>>  }
>> 
>> -struct paging_config paging_prepare(void *rmode)
>> +bool trampoline_pgtable_init(struct boot_params *boot_params)
>>  {
>> -       struct paging_config paging_config = {};
>> -
>> -       /* Initialize boot_params. Required for 
>> cmdline_find_option_bool(). */
>> -       boot_params = rmode;
>> 
>>         /*
>>          * Check if LA57 is desired and supported.
>> @@ -125,26 +121,10 @@ struct paging_config paging_prepare(void *rmode)
>>          *
>>          * That's substitute for boot_cpu_has() in early boot code.
>>          */
>> -       if (IS_ENABLED(CONFIG_X86_5LEVEL) &&
>> -                       !cmdline_find_option_bool("no5lvl") &&
>> -                       native_cpuid_eax(0) >= 7 &&
>> -                       (native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 
>> & 31)))) {
>> -               paging_config.l5_required = 1;
>> -       }
>> -
>> -       paging_config.trampoline_start = find_trampoline_placement();
>> -
>> -       trampoline_32bit = (unsigned long 
>> *)paging_config.trampoline_start;
>> -
>> -       /* Preserve trampoline memory */
>> -       memcpy(trampoline_save, trampoline_32bit, 
>> TRAMPOLINE_32BIT_SIZE);
>> -
>> -       /* Clear trampoline memory first */
>> -       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
>> -
>> -       /* Copy trampoline code in place */
>> -       memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / 
>> sizeof(unsigned long),
>> -                       &trampoline_32bit_src, 
>> TRAMPOLINE_32BIT_CODE_SIZE);
>> +       bool l5_required = IS_ENABLED(CONFIG_X86_5LEVEL) &&
>> +                          !cmdline_find_option_bool("no5lvl") &&
>> +                          native_cpuid_eax(0) >= 7 &&
>> +                          (native_cpuid_ecx(7) & (1 << 
>> (X86_FEATURE_LA57 & 31)));
>> 
>>         /*
>>          * The code below prepares page table in trampoline memory.
>> @@ -160,10 +140,10 @@ struct paging_config paging_prepare(void *rmode)
>>          * We are not going to use the page table in trampoline memory 
>> if we
>>          * are already in the desired paging mode.
>>          */
>> -       if (paging_config.l5_required == !!(native_read_cr4() & 
>> X86_CR4_LA57))
>> +       if (l5_required == !!(native_read_cr4() & X86_CR4_LA57))
>>                 goto out;
>> 
>> -       if (paging_config.l5_required) {
>> +       if (l5_required) {
>>                 /*
>>                  * For 4- to 5-level paging transition, set up current 
>> CR3 as
>>                  * the first and the only entry in a new top-level 
>> page table.
>> @@ -185,6 +165,45 @@ struct paging_config paging_prepare(void *rmode)
>>                        (void *)src, PAGE_SIZE);
>>         }
>> 
>> +out:
>> +       return l5_required;
>> +}
>> +
>> +struct paging_config paging_prepare(void *rmode)
>> +{
>> +       struct paging_config paging_config = {};
>> +       bool early_trampoline_alloc = 0;
> 
> false
> 
>> +
>> +       /* Initialize boot_params. Required for 
>> cmdline_find_option_bool(). */
>> +       boot_params = rmode;
>> +
>> +       /*
>> +        * We only need to find trampoline placement, if we have
>> +        * not already done it from libstub.
>> +        */
>> +
>> +       paging_config.trampoline_start = find_trampoline_placement();
>> +       trampoline_32bit = (unsigned long 
>> *)paging_config.trampoline_start;
>> +       early_trampoline_alloc = 0;
>> +
> 
> false again
> 
> And it never becomes true, nor is it used anywhere else. Can we get rid 
> of it?

Yes, probably it is just a leftover of the approach I used
before. I'll remove that.
> 
>> +       /*
>> +        * Preserve trampoline memory.
>> +        * When trampoline is located in memory
>> +        * owned by us, i.e. allocated in EFISTUB,
>> +        * we don't care about previous contents
>> +        * of this memory so copying can also be skipped.
> 
> Can you please reflow comments so they takes up fewer lines?
> 

Will fix.

>> +        */
>> +       memcpy(trampoline_save, trampoline_32bit, 
>> TRAMPOLINE_32BIT_SIZE);
>> +
>> +       /* Clear trampoline memory first */
>> +       memset(trampoline_32bit, 0, TRAMPOLINE_32BIT_SIZE);
>> +
>> +       /* Copy trampoline code in place */
>> +       memcpy(trampoline_32bit + TRAMPOLINE_32BIT_CODE_OFFSET / 
>> sizeof(unsigned long),
>> +                       &trampoline_32bit_src, 
>> TRAMPOLINE_32BIT_CODE_SIZE);
>> +
>> +       paging_config.l5_required = 
>> trampoline_pgtable_init(boot_params);
>> +
>>  out:
>>         return paging_config;
>>  }
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size
  2023-03-11 14:30     ` Evgeniy Baskov
@ 2023-03-11 14:42       ` Ard Biesheuvel
  0 siblings, 0 replies; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-11 14:42 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Sat, 11 Mar 2023 at 15:30, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-10 17:43, Ard Biesheuvel wrote:
> > On Thu, 15 Dec 2022 at 13:38, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> To protect sections on page table level each section
> >> needs to be aligned on page size (4KB).
> >>
> >> Set sections alignment in linker script.
> >>
> >> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >> ---
> >>  arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++++
> >>  1 file changed, 6 insertions(+)
> >>
> >> diff --git a/arch/x86/boot/compressed/vmlinux.lds.S
> >> b/arch/x86/boot/compressed/vmlinux.lds.S
> >> index 112b2375d021..6be90f1a1198 100644
> >> --- a/arch/x86/boot/compressed/vmlinux.lds.S
> >> +++ b/arch/x86/boot/compressed/vmlinux.lds.S
> >> @@ -27,21 +27,27 @@ SECTIONS
> >>                 HEAD_TEXT
> >>                 _ehead = . ;
> >>         }
> >> +       . = ALIGN(PAGE_SIZE);
> >>         .rodata..compressed : {
> >> +               _compressed = .;
> >>                 *(.rodata..compressed)
> >
> > Can you just move this bit into the rodata section below?
>
> I don't think that easily possible, as the layout need
> to stay compatible with in-place extraction for non-UEFI boot.
> For that execution path the code in .head.text moves everything
> behind it to the end of the extraction buffer and extraction
> code overwrites compressed kernel blob progressively during
> extraction. And that is why we have effectively have two code
> sections...
>

A right - thanks for explaining that to me.

So in the end, I think it doesn't matter in any case if we just stick
to a single .text section with R-X attributes and a single .data
section with RW- attributes.


> >
> >> +               _ecompressed = .;
> >>         }
> >> +       . = ALIGN(PAGE_SIZE);
> >>         .text : {
> >
> > Please use
> >
> > .text : ALIGN(PAGE_SIZE) {
> >
> > which marks the section as being page aligned, rather than just being
> > placed on a 4k boundary.
>
> Will fix in v5.
>
> >
> >>                 _text = .;      /* Text */
> >>                 *(.text)
> >>                 *(.text.*)
> >>                 _etext = . ;
> >>         }
> >> +       . = ALIGN(PAGE_SIZE);
> >>         .rodata : {
> >>                 _rodata = . ;
> >>                 *(.rodata)       /* read-only data */
> >>                 *(.rodata.*)
> >>                 _erodata = . ;
> >>         }
> >> +       . = ALIGN(PAGE_SIZE);
> >>         .data : {
> >>                 _data = . ;
> >>                 *(.data)
> >> --
> >> 2.37.4
> >>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub
  2023-03-10 14:59   ` Ard Biesheuvel
@ 2023-03-11 14:49     ` Evgeniy Baskov
  2023-03-11 17:27       ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 14:49 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 17:59, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> This is required to fit more sections in PE section tables,
>> since its size is restricted by zero page located at specific offset
>> after the PE header.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> I'd prefer to rip this out altogether.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?id=9510f6f04f579b9a3f54ad762c75ab2d905e37d8

Sounds great! Can I replace this patch with yours in v5?

> 
> (and refer to the other thread in linux-efi@)

Which thread exactly? The one about the removal of
real-mode code?

> 
>> ---
>>  arch/x86/boot/header.S | 14 ++++++--------
>>  1 file changed, 6 insertions(+), 8 deletions(-)
>> 
>> diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
>> index 9338c68e7413..9fec80bc504b 100644
>> --- a/arch/x86/boot/header.S
>> +++ b/arch/x86/boot/header.S
>> @@ -59,17 +59,16 @@ start2:
>>         cld
>> 
>>         movw    $bugger_off_msg, %si
>> +       movw    $bugger_off_msg_size, %cx
>> 
>>  msg_loop:
>>         lodsb
>> -       andb    %al, %al
>> -       jz      bs_die
>>         movb    $0xe, %ah
>>         movw    $7, %bx
>>         int     $0x10
>> -       jmp     msg_loop
>> +       decw    %cx
>> +       jnz     msg_loop
>> 
>> -bs_die:
>>         # Allow the user to press a key, then reboot
>>         xorw    %ax, %ax
>>         int     $0x16
>> @@ -90,10 +89,9 @@ bs_die:
>> 
>>         .section ".bsdata", "a"
>>  bugger_off_msg:
>> -       .ascii  "Use a boot loader.\r\n"
>> -       .ascii  "\n"
>> -       .ascii  "Remove disk and press any key to reboot...\r\n"
>> -       .byte   0
>> +       .ascii  "Use a boot loader. "
>> +       .ascii  "Press a key to reboot"
>> +       .set    bugger_off_msg_size, . - bugger_off_msg
>> 
>>  #ifdef CONFIG_EFI_STUB
>>  pe_header:
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2023-03-10 15:17   ` Ard Biesheuvel
@ 2023-03-11 15:02     ` Evgeniy Baskov
  2023-03-11 17:31       ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 15:02 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 18:17, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Currently kernel image is not fully compliant PE image, so it may
>> fail to boot with stricter implementations of UEFI PE loaders.
>> 
>> Set minimal alignments and sizes specified by PE documentation [1]
>> referenced by UEFI specification [2]. Align PE header to 8 bytes.
>> 
>> Generate PE sections dynamically. This simplifies code, since with
>> current implementation all of the sections needs to be defined in
>> header.S, where most section header fields do not hold valid values,
>> except for their names. Before the change, it also held flags,
>> but now flags depend on kernel configuration and it is simpler
>> to set them from build.c too.
>> 
>> Setup sections protection. Since we cannot fit every needed section,
>> set a part of protection flags dynamically during initialization.
>> This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.
>> 
>> [1] 
>> https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
>> [2] 
>> https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf
>> 
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> I would prefer it if we didn't rewrite the build tool this way.
> 
> Having the sections in header.S in the order they appear in the binary
> is rather useful, and I don't think we should manipulate the section
> flags based on whether CONFIG_DXE_MEM_ATTRIBUTES is set. I also don't
> think we need more than .text / .,data (as discussed in the other
> thread on linux-efi@)
> 
> Furthermore, I had a look at the audk PE loader [0], and I think it is
> being overly pedantic.
> 
> The PE/COFF spec does not require that all sections are virtually
> contiguous, and it does not require that the file content is
> completely covered by either the header or by a section.
> 
> So what I would prefer to do is the following:
> 
> Sections:
> Idx Name          Size     VMA              Type
>   0 .reloc        00000200 0000000000002000 DATA
>   1 .compat       00000200 0000000000003000 DATA
>   2 .text         00bee000 0000000000004000 TEXT
>   3 .data         00002200 0000000000bf2000 DATA
> 
> using 4k section alignment and 512 byte file alignment, and a header
> size of 0x200 as before (This requires my patch that allows the setup
> header to remain unmapped when running the stub [1])
> 
> The reloc and compat payloads are placed at the end of the setup data
> as before, but increased in size to 512 bytes each, and then mapped
> non-1:1 into the RVA space.
> 
> This works happily with both the existing PE loader as well as the
> audk one, but with the pedantic flags disabled.
> 

This makes sense. I'll change this patch to use this layout and
to keep sections in headers.S before sending v5. (and I guess I'll
make the compressed kernel a part of .text). I have a few questions
though:

This layout assumes having the local copy of the bootparams as
in your RFC patches, right?

Can I keep the .rodata -- 5th section fits in the section table
without much work?

Also, why .reloc is at offset 0x2000 and not just 0x1000, is there
anything important I am missing? I understand that is cannot be 0
and should be aligned on page size, but nothing else comes to my
mind...

Thanks!
> 
> 
> [0] https://github.com/acidanthera/audk
> [1] 
> https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?id=84412899c16c65af13dac305aa01a5a85e08c69e
> 
>> ---
>>  arch/x86/boot/Makefile                  |   2 +-
>>  arch/x86/boot/header.S                  |  96 +--------
>>  arch/x86/boot/tools/build.c             | 270 
>> +++++++++++++-----------
>>  drivers/firmware/efi/libstub/x86-stub.c |   7 +-
>>  4 files changed, 161 insertions(+), 214 deletions(-)
>> 
>> diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
>> index 9e38ffaadb5d..bed78c82238e 100644
>> --- a/arch/x86/boot/Makefile
>> +++ b/arch/x86/boot/Makefile
>> @@ -91,7 +91,7 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
>> 
>>  SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))
>> 
>> -sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [a-zA-Z] 
>> \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|efi32_pe_entry\|input_data\|kernel_info\|_end\|_ehead\|_text\|z_.*\)$$/\#define 
>> ZO_\2 0x\1/p'
>> +sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [a-zA-Z] 
>> \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|efi32_pe_entry\|input_data\|kernel_info\|_end\|_ehead\|_text\|_rodata\|z_.*\)$$/\#define 
>> ZO_\2 0x\1/p'
>> 
>>  quiet_cmd_zoffset = ZOFFSET $@
>>        cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
>> diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
>> index 9fec80bc504b..07e31ddb074f 100644
>> --- a/arch/x86/boot/header.S
>> +++ b/arch/x86/boot/header.S
>> @@ -94,6 +94,7 @@ bugger_off_msg:
>>         .set    bugger_off_msg_size, . - bugger_off_msg
>> 
>>  #ifdef CONFIG_EFI_STUB
>> +       .align 8
>>  pe_header:
>>         .long   PE_MAGIC
>> 
>> @@ -107,7 +108,7 @@ coff_header:
>>         .set    pe_opt_magic, PE_OPT_MAGIC_PE32PLUS
>>         .word   IMAGE_FILE_MACHINE_AMD64
>>  #endif
>> -       .word   section_count                   # nr_sections
>> +       .word   0                               # nr_sections
>>         .long   0                               # TimeDateStamp
>>         .long   0                               # PointerToSymbolTable
>>         .long   1                               # NumberOfSymbols
>> @@ -131,7 +132,7 @@ optional_header:
>>         # Filled in by build.c
>>         .long   0x0000                          # AddressOfEntryPoint
>> 
>> -       .long   0x0200                          # BaseOfCode
>> +       .long   0x1000                          # BaseOfCode
>>  #ifdef CONFIG_X86_32
>>         .long   0                               # data
>>  #endif
>> @@ -144,8 +145,8 @@ extra_header_fields:
>>  #else
>>         .quad   image_base                      # ImageBase
>>  #endif
>> -       .long   0x20                            # SectionAlignment
>> -       .long   0x20                            # FileAlignment
>> +       .long   0x1000                          # SectionAlignment
>> +       .long   0x200                           # FileAlignment
>>         .word   0                               # 
>> MajorOperatingSystemVersion
>>         .word   0                               # 
>> MinorOperatingSystemVersion
>>         .word   LINUX_EFISTUB_MAJOR_VERSION     # MajorImageVersion
>> @@ -188,91 +189,14 @@ extra_header_fields:
>>         .quad   0                               # CertificationTable
>>         .quad   0                               # BaseRelocationTable
>> 
>> -       # Section table
>> -section_table:
>> -       #
>> -       # The offset & size fields are filled in by build.c.
>> -       #
>> -       .ascii  ".setup"
>> -       .byte   0
>> -       .byte   0
>> -       .long   0
>> -       .long   0x0                             # startup_{32,64}
>> -       .long   0                               # Size of initialized 
>> data
>> -                                               # on disk
>> -       .long   0x0                             # startup_{32,64}
>> -       .long   0                               # PointerToRelocations
>> -       .long   0                               # PointerToLineNumbers
>> -       .word   0                               # NumberOfRelocations
>> -       .word   0                               # NumberOfLineNumbers
>> -       .long   IMAGE_SCN_CNT_CODE              | \
>> -               IMAGE_SCN_MEM_READ              | \
>> -               IMAGE_SCN_MEM_EXECUTE           | \
>> -               IMAGE_SCN_ALIGN_16BYTES         # Characteristics
>> -
>> -       #
>> -       # The EFI application loader requires a relocation section
>> -       # because EFI applications must be relocatable. The .reloc
>> -       # offset & size fields are filled in by build.c.
>>         #
>> -       .ascii  ".reloc"
>> -       .byte   0
>> -       .byte   0
>> -       .long   0
>> -       .long   0
>> -       .long   0                               # SizeOfRawData
>> -       .long   0                               # PointerToRawData
>> -       .long   0                               # PointerToRelocations
>> -       .long   0                               # PointerToLineNumbers
>> -       .word   0                               # NumberOfRelocations
>> -       .word   0                               # NumberOfLineNumbers
>> -       .long   IMAGE_SCN_CNT_INITIALIZED_DATA  | \
>> -               IMAGE_SCN_MEM_READ              | \
>> -               IMAGE_SCN_MEM_DISCARDABLE       | \
>> -               IMAGE_SCN_ALIGN_1BYTES          # Characteristics
>> -
>> -#ifdef CONFIG_EFI_MIXED
>> -       #
>> -       # The offset & size fields are filled in by build.c.
>> +       # Section table
>> +       # It is generated by build.c and here we just need
>> +       # to reserve some space for sections
>>         #
>> -       .asciz  ".compat"
>> -       .long   0
>> -       .long   0x0
>> -       .long   0                               # Size of initialized 
>> data
>> -                                               # on disk
>> -       .long   0x0
>> -       .long   0                               # PointerToRelocations
>> -       .long   0                               # PointerToLineNumbers
>> -       .word   0                               # NumberOfRelocations
>> -       .word   0                               # NumberOfLineNumbers
>> -       .long   IMAGE_SCN_CNT_INITIALIZED_DATA  | \
>> -               IMAGE_SCN_MEM_READ              | \
>> -               IMAGE_SCN_MEM_DISCARDABLE       | \
>> -               IMAGE_SCN_ALIGN_1BYTES          # Characteristics
>> -#endif
>> +section_table:
>> +       .fill 40*5, 1, 0
>> 
>> -       #
>> -       # The offset & size fields are filled in by build.c.
>> -       #
>> -       .ascii  ".text"
>> -       .byte   0
>> -       .byte   0
>> -       .byte   0
>> -       .long   0
>> -       .long   0x0                             # startup_{32,64}
>> -       .long   0                               # Size of initialized 
>> data
>> -                                               # on disk
>> -       .long   0x0                             # startup_{32,64}
>> -       .long   0                               # PointerToRelocations
>> -       .long   0                               # PointerToLineNumbers
>> -       .word   0                               # NumberOfRelocations
>> -       .word   0                               # NumberOfLineNumbers
>> -       .long   IMAGE_SCN_CNT_CODE              | \
>> -               IMAGE_SCN_MEM_READ              | \
>> -               IMAGE_SCN_MEM_EXECUTE           | \
>> -               IMAGE_SCN_ALIGN_16BYTES         # Characteristics
>> -
>> -       .set    section_count, (. - section_table) / 40
>>  #endif /* CONFIG_EFI_STUB */
>> 
>>         # Kernel attributes; used by setup.  This is part 1 of the
>> diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
>> index fbc5315af032..ac6159b76a13 100644
>> --- a/arch/x86/boot/tools/build.c
>> +++ b/arch/x86/boot/tools/build.c
>> @@ -61,8 +61,10 @@ uint8_t buf[SETUP_SECT_MAX*SECTOR_SIZE];
>> 
>>  #ifdef CONFIG_EFI_MIXED
>>  #define PECOFF_COMPAT_RESERVE 0x20
>> +#define COMPAT_SECTION_SIZE 0x8
>>  #else
>>  #define PECOFF_COMPAT_RESERVE 0x0
>> +#define COMPAT_SECTION_SIZE 0x0
>>  #endif
>> 
>>  #define RELOC_SECTION_SIZE 10
>> @@ -117,6 +119,7 @@ static unsigned long efi_pe_entry;
>>  static unsigned long efi32_pe_entry;
>>  static unsigned long kernel_info;
>>  static unsigned long startup_64;
>> +static unsigned long _rodata;
>>  static unsigned long _ehead;
>>  static unsigned long _end;
>> 
>> @@ -258,122 +261,177 @@ static void *map_output_file(const char *path, 
>> size_t size)
>> 
>>  #ifdef CONFIG_EFI_STUB
>> 
>> -static void update_pecoff_section_header_fields(char *section_name, 
>> uint32_t vma,
>> -                                               uint32_t size, 
>> uint32_t datasz,
>> -                                               uint32_t offset)
>> +static unsigned int reloc_offset;
>> +static unsigned int compat_offset;
>> +
>> +#define MAX_SECTIONS 5
>> +
>> +static void emit_pecoff_section(const char *section_name, unsigned 
>> int size,
>> +                               unsigned int bss, unsigned int 
>> *file_offset,
>> +                               unsigned int *mem_offset, uint32_t 
>> flags)
>>  {
>> -       unsigned int pe_header;
>> +       unsigned int section_memsz, section_filesz;
>> +       unsigned int name_len;
>>         unsigned short num_sections;
>> +       struct pe_hdr *hdr = get_pe_header(buf);
>>         struct section_header *section;
>> 
>> -       struct pe_hdr *hdr = get_pe_header(buf);
>>         num_sections = get_unaligned_le16(&hdr->sections);
>> -       section = get_sections(buf);
>> +       if (num_sections >= MAX_SECTIONS)
>> +               die("Not enough space to generate all sections");
>> 
>> -       while (num_sections > 0) {
>> -               if (strncmp(section->name, section_name, 8) == 0) {
>> -                       /* section header size field */
>> -                       put_unaligned_le32(size, 
>> &section->virtual_size);
>> +       section = get_sections(buf) + num_sections;
>> 
>> -                       /* section header vma field */
>> -                       put_unaligned_le32(vma, 
>> &section->virtual_address);
>> +       if ((size & (FILE_ALIGNMENT - 1)) || (bss & (FILE_ALIGNMENT - 
>> 1)))
>> +               die("Section '%s' is improperly aligned", 
>> section_name);
>> 
>> -                       /* section header 'size of initialised data' 
>> field */
>> -                       put_unaligned_le32(datasz, 
>> &section->raw_data_size);
>> +       section_memsz = round_up(size + bss, SECTION_ALIGNMENT);
>> +       section_filesz = round_up(size, FILE_ALIGNMENT);
>> 
>> -                       /* section header 'file offset' field */
>> -                       put_unaligned_le32(offset, 
>> &section->data_addr);
>> +       /* Zero out all section fields */
>> +       memset(section, 0, sizeof(*section));
>> 
>> -                       break;
>> -               }
>> -               section++;
>> -               num_sections--;
>> -       }
>> -}
>> +       name_len = strlen(section_name);
>> +       if (name_len > sizeof(section->name))
>> +               name_len = sizeof(section_name);
>> 
>> -static void update_pecoff_section_header(char *section_name, uint32_t 
>> offset, uint32_t size)
>> -{
>> -       update_pecoff_section_header_fields(section_name, offset, 
>> size, size, offset);
>> +       /* Section header size field */
>> +       memcpy(section->name, section_name, name_len);
>> +
>> +       put_unaligned_le32(section_memsz, &section->virtual_size);
>> +       put_unaligned_le32(*mem_offset, &section->virtual_address);
>> +       put_unaligned_le32(section_filesz, &section->raw_data_size);
>> +       put_unaligned_le32(*file_offset, &section->data_addr);
>> +       put_unaligned_le32(flags, &section->flags);
>> +
>> +       put_unaligned_le16(num_sections + 1, &hdr->sections);
>> +
>> +       *mem_offset += section_memsz;
>> +       *file_offset += section_filesz;
>>  }
>> 
>> -static void update_pecoff_setup_and_reloc(unsigned int size)
>> +#define BASE_RVA 0x1000
>> +
>> +static unsigned int text_rva;
>> +
>> +static unsigned int update_pecoff_sections(unsigned int setup_size,
>> +                                          unsigned int file_size,
>> +                                          unsigned int virt_size,
>> +                                          unsigned int text_size)
>>  {
>> -       uint32_t setup_offset = SECTOR_SIZE;
>> -       uint32_t reloc_offset = size - PECOFF_RELOC_RESERVE - 
>> PECOFF_COMPAT_RESERVE;
>> -#ifdef CONFIG_EFI_MIXED
>> -       uint32_t compat_offset = reloc_offset + PECOFF_RELOC_RESERVE;
>> -#endif
>> -       uint32_t setup_size = reloc_offset - setup_offset;
>> +       /* First section starts at 512 byes, after PE header */
>> +       unsigned int mem_offset = BASE_RVA, file_offset = SECTOR_SIZE;
>> +       unsigned int compat_size, reloc_size;
>> +       unsigned int bss_size, text_rva_diff, reloc_rva;
>> +       pe_opt_hdr  *opt_hdr = get_pe_opt_header(buf);
>> +       struct pe_hdr *hdr = get_pe_header(buf);
>> +       struct data_dirent *base_reloc;
>> +
>> +       if (get_unaligned_le32(&hdr->sections))
>> +               die("Some sections present in PE file");
>> 
>> -       update_pecoff_section_header(".setup", setup_offset, 
>> setup_size);
>> -       update_pecoff_section_header(".reloc", reloc_offset, 
>> PECOFF_RELOC_RESERVE);
>> +       reloc_size = round_up(RELOC_SECTION_SIZE, FILE_ALIGNMENT);
>> +       compat_size = round_up(COMPAT_SECTION_SIZE, FILE_ALIGNMENT);
>> +       virt_size = round_up(virt_size, SECTION_ALIGNMENT);
>> 
>>         /*
>> -        * Modify .reloc section contents with a single entry. The
>> -        * relocation is applied to offset 10 of the relocation 
>> section.
>> +        * Update sections offsets.
>> +        * NOTE: Order is important
>>          */
>> -       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, 
>> &buf[reloc_offset]);
>> -       put_unaligned_le32(RELOC_SECTION_SIZE, &buf[reloc_offset + 
>> 4]);
>> 
>> +       bss_size = virt_size - file_size;
>> +
>> +       emit_pecoff_section(".setup", setup_size - SECTOR_SIZE, 0,
>> +                           &file_offset, &mem_offset, SCN_RO |
>> +                           IMAGE_SCN_CNT_INITIALIZED_DATA);
>> +
>> +       text_rva_diff = mem_offset - file_offset;
>> +       text_rva = mem_offset;
>> +       emit_pecoff_section(".text", text_size, 0,
>> +                           &file_offset, &mem_offset, SCN_RX |
>> +                           IMAGE_SCN_CNT_CODE);
>> +
>> +       /* Check that kernel sections mapping is contiguous */
>> +       if (text_rva_diff != mem_offset - file_offset)
>> +               die("Kernel sections mapping is wrong: %#x != %#x",
>> +                   mem_offset - file_offset, text_rva_diff);
>> +
>> +       emit_pecoff_section(".data", file_size - text_size, bss_size,
>> +                           &file_offset, &mem_offset, SCN_RW |
>> +                           IMAGE_SCN_CNT_INITIALIZED_DATA);
>> +
>> +       reloc_offset = file_offset;
>> +       reloc_rva = mem_offset;
>> +       emit_pecoff_section(".reloc", reloc_size, 0,
>> +                           &file_offset, &mem_offset, SCN_RW |
>> +                           IMAGE_SCN_CNT_INITIALIZED_DATA |
>> +                           IMAGE_SCN_MEM_DISCARDABLE);
>> +
>> +       compat_offset = file_offset;
>>  #ifdef CONFIG_EFI_MIXED
>> -       update_pecoff_section_header(".compat", compat_offset, 
>> PECOFF_COMPAT_RESERVE);
>> +       emit_pecoff_section(".comat", compat_size, 0,
>> +                           &file_offset, &mem_offset, SCN_RW |
>> +                           IMAGE_SCN_CNT_INITIALIZED_DATA |
>> +                           IMAGE_SCN_MEM_DISCARDABLE);
>> +#endif
>> 
>> +       if (file_size + setup_size + reloc_size + compat_size != 
>> file_offset)
>> +               die("file_size(%#x) != filesz(%#x)",
>> +                   file_size + setup_size + reloc_size + compat_size, 
>> file_offset);
>> +
>> +       /* Size of code. */
>> +       put_unaligned_le32(round_up(text_size, SECTION_ALIGNMENT), 
>> &opt_hdr->text_size);
>>         /*
>> -        * Put the IA-32 machine type (0x14c) and the associated entry 
>> point
>> -        * address in the .compat section, so loaders can figure out 
>> which other
>> -        * execution modes this image supports.
>> +        * Size of data.
>> +        * Exclude text size and first sector, which contains PE 
>> header.
>>          */
>> -       buf[compat_offset] = 0x1;
>> -       buf[compat_offset + 1] = 0x8;
>> -       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, &buf[compat_offset 
>> + 2]);
>> -       put_unaligned_le32(efi32_pe_entry + size, &buf[compat_offset + 
>> 4]);
>> -#endif
>> -}
>> +       put_unaligned_le32(mem_offset - round_up(text_size, 
>> SECTION_ALIGNMENT),
>> +                          &opt_hdr->data_size);
>> 
>> -static unsigned int update_pecoff_sections(unsigned int text_start, 
>> unsigned int text_sz,
>> -                              unsigned int init_sz)
>> -{
>> -       unsigned int file_sz = text_start + text_sz;
>> -       unsigned int bss_sz = init_sz - file_sz;
>> -       pe_opt_hdr *hdr = get_pe_opt_header(buf);
>> +       /* Size of image. */
>> +       put_unaligned_le32(mem_offset, &opt_hdr->image_size);
>> 
>>         /*
>> -        * The PE/COFF loader may load the image at an address which 
>> is
>> -        * misaligned with respect to the kernel_alignment field in 
>> the setup
>> -        * header.
>> -        *
>> -        * In order to avoid relocating the kernel to correct the 
>> misalignment,
>> -        * add slack to allow the buffer to be aligned within the 
>> declared size
>> -        * of the image.
>> +        * Address of entry point for PE/COFF executable
>>          */
>> -       bss_sz  += CONFIG_PHYSICAL_ALIGN;
>> -       init_sz += CONFIG_PHYSICAL_ALIGN;
>> +       put_unaligned_le32(text_rva + efi_pe_entry, 
>> &opt_hdr->entry_point);
>> 
>>         /*
>> -        * Size of code: Subtract the size of the first sector (512 
>> bytes)
>> -        * which includes the header.
>> +        * BaseOfCode for PE/COFF executable
>>          */
>> -       put_unaligned_le32(file_sz - SECTOR_SIZE + bss_sz, 
>> &hdr->text_size);
>> -
>> -       /* Size of image */
>> -       put_unaligned_le32(init_sz, &hdr->image_size);
>> +       put_unaligned_le32(text_rva, &opt_hdr->code_base);
>> 
>>         /*
>> -        * Address of entry point for PE/COFF executable
>> +        * Since we have generated .reloc section, we need to
>> +        * fill-in Reloc directory
>>          */
>> -       put_unaligned_le32(text_start + efi_pe_entry, 
>> &hdr->entry_point);
>> +       base_reloc = &get_data_dirs(buf)->base_relocations;
>> +       put_unaligned_le32(reloc_rva, &base_reloc->virtual_address);
>> +       put_unaligned_le32(RELOC_SECTION_SIZE, &base_reloc->size);
>> 
>> -       update_pecoff_section_header_fields(".text", text_start, 
>> text_sz + bss_sz,
>> -                                           text_sz, text_start);
>> -
>> -       return text_start + file_sz;
>> +       return file_offset;
>>  }
>> 
>> -static int reserve_pecoff_reloc_section(int c)
>> +static void generate_pecoff_section_data(uint8_t *output)
>>  {
>> -       /* Reserve 0x20 bytes for .reloc section */
>> -       memset(buf+c, 0, PECOFF_RELOC_RESERVE);
>> -       return PECOFF_RELOC_RESERVE;
>> +       /*
>> +        * Modify .reloc section contents with a two entries. The
>> +        * relocation is applied to offset 10 of the relocation 
>> section.
>> +        */
>> +       put_unaligned_le32(reloc_offset + RELOC_SECTION_SIZE, 
>> &output[reloc_offset]);
>> +       put_unaligned_le32(RELOC_SECTION_SIZE, &output[reloc_offset + 
>> 4]);
>> +
>> +#ifdef CONFIG_EFI_MIXED
>> +       /*
>> +        * Put the IA-32 machine type (0x14c) and the associated entry 
>> point
>> +        * address in the .compat section, so loaders can figure out 
>> which other
>> +        * execution modes this image supports.
>> +        */
>> +       output[compat_offset] = 0x1;
>> +       output[compat_offset + 1] = 0x8;
>> +       put_unaligned_le16(IMAGE_FILE_MACHINE_I386, 
>> &output[compat_offset + 2]);
>> +       put_unaligned_le32(efi32_pe_entry + text_rva, 
>> &output[compat_offset + 4]);
>> +#endif
>>  }
>> 
>>  static void efi_stub_update_defaults(void)
>> @@ -407,26 +465,10 @@ static void efi_stub_entry_update(void)
>> 
>>  #else
>> 
>> -static inline void update_pecoff_setup_and_reloc(unsigned int size) 
>> {}
>> -static inline void update_pecoff_text(unsigned int text_start,
>> -                                     unsigned int file_sz,
>> -                                     unsigned int init_sz) {}
>> -static inline void efi_stub_update_defaults(void) {}
>> -static inline void efi_stub_entry_update(void) {}
>> +static void efi_stub_update_defaults(void) {}
>> 
>> -static inline int reserve_pecoff_reloc_section(int c)
>> -{
>> -       return 0;
>> -}
>>  #endif /* CONFIG_EFI_STUB */
>> 
>> -static int reserve_pecoff_compat_section(int c)
>> -{
>> -       /* Reserve 0x20 bytes for .compat section */
>> -       memset(buf+c, 0, PECOFF_COMPAT_RESERVE);
>> -       return PECOFF_COMPAT_RESERVE;
>> -}
>> -
>>  /*
>>   * Parse zoffset.h and find the entry points. We could just #include 
>> zoffset.h
>>   * but that would mean tools/build would have to be rebuilt every 
>> time. It's
>> @@ -456,6 +498,7 @@ static void parse_zoffset(char *fname)
>>                 PARSE_ZOFS(p, efi32_pe_entry);
>>                 PARSE_ZOFS(p, kernel_info);
>>                 PARSE_ZOFS(p, startup_64);
>> +               PARSE_ZOFS(p, _rodata);
>>                 PARSE_ZOFS(p, _ehead);
>>                 PARSE_ZOFS(p, _end);
>> 
>> @@ -489,10 +532,6 @@ static unsigned int read_setup(char *path)
>> 
>>         fclose(file);
>> 
>> -       /* Reserve space for PE sections */
>> -       file_size += reserve_pecoff_compat_section(file_size);
>> -       file_size += reserve_pecoff_reloc_section(file_size);
>> -
>>         /* Pad unused space with zeros */
>> 
>>         setup_size = round_up(file_size, SECTOR_SIZE);
>> @@ -515,7 +554,6 @@ int main(int argc, char **argv)
>>         size_t kern_file_size;
>>         unsigned int setup_size;
>>         unsigned int setup_sectors;
>> -       unsigned int init_size;
>>         unsigned int total_size;
>>         unsigned int kern_size;
>>         void *kernel;
>> @@ -540,8 +578,7 @@ int main(int argc, char **argv)
>> 
>>  #ifdef CONFIG_EFI_STUB
>>         /* PE specification require 512-byte minimum section file 
>> alignment */
>> -       kern_size = round_up(kern_file_size + 4, SECTOR_SIZE);
>> -       update_pecoff_setup_and_reloc(setup_size);
>> +       kern_size = round_up(kern_file_size + 4, FILE_ALIGNMENT);
>>  #else
>>         /* Number of 16-byte paragraphs, including space for a 4-byte 
>> CRC */
>>         kern_size = round_up(kern_file_size + 4, PARAGRAPH_SIZE);
>> @@ -554,33 +591,12 @@ int main(int argc, char **argv)
>>         /* Update kernel_info offset. */
>>         put_unaligned_le32(kernel_info, &buf[0x268]);
>> 
>> -       init_size = get_unaligned_le32(&buf[0x260]);
>> -
>>  #ifdef CONFIG_EFI_STUB
>> -       /*
>> -        * The decompression buffer will start at ImageBase. When 
>> relocating
>> -        * the compressed kernel to its end, we must ensure that the 
>> head
>> -        * section does not get overwritten.  The head section 
>> occupies
>> -        * [i, i + _ehead), and the destination is [init_sz - _end, 
>> init_sz).
>> -        *
>> -        * At present these should never overlap, because 'i' is at 
>> most 32k
>> -        * because of SETUP_SECT_MAX, '_ehead' is less than 1k, and 
>> the
>> -        * calculation of INIT_SIZE in boot/header.S ensures that
>> -        * 'init_sz - _end' is at least 64k.
>> -        *
>> -        * For future-proofing, increase init_sz if necessary.
>> -        */
>> -
>> -       if (init_size - _end < setup_size + _ehead) {
>> -               init_size = round_up(setup_size + _ehead + _end, 
>> SECTION_ALIGNMENT);
>> -               put_unaligned_le32(init_size, &buf[0x260]);
>> -       }
>> 
>> -       total_size = update_pecoff_sections(setup_size, kern_size, 
>> init_size);
>> +       total_size = update_pecoff_sections(setup_size, kern_size, 
>> _end, _rodata);
>> 
>>         efi_stub_entry_update();
>>  #else
>> -       (void)init_size;
>>         total_size = setup_size + kern_size;
>>  #endif
>> 
>> @@ -590,6 +606,10 @@ int main(int argc, char **argv)
>>         memcpy(output + setup_size, kernel, kern_file_size);
>>         memset(output + setup_size + kern_file_size, 0, kern_size - 
>> kern_file_size);
>> 
>> +#ifdef CONFIG_EFI_STUB
>> +       generate_pecoff_section_data(output);
>> +#endif
>> +
>>         /* Calculate and write kernel checksum. */
>>         crc = partial_crc32(output, total_size - 4, crc);
>>         put_unaligned_le32(crc, &output[total_size - 4]);
>> diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
>> b/drivers/firmware/efi/libstub/x86-stub.c
>> index 1d1ab1911fd3..1f0a2e7075c3 100644
>> --- a/drivers/firmware/efi/libstub/x86-stub.c
>> +++ b/drivers/firmware/efi/libstub/x86-stub.c
>> @@ -389,8 +389,11 @@ efi_status_t __efiapi efi_pe_entry(efi_handle_t 
>> handle,
>> 
>>         hdr = &boot_params->hdr;
>> 
>> -       /* Copy the setup header from the second sector to boot_params 
>> */
>> -       memcpy(&hdr->jump, image_base + 512,
>> +       /*
>> +        * Copy the setup header from the second sector
>> +        * (mapped to image_base + 0x1000) to boot_params
>> +        */
>> +       memcpy(&hdr->jump, image_base + 0x1000,
>>                sizeof(struct setup_header) - offsetof(struct 
>> setup_header, jump));
>> 
>>         /*
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes
  2023-03-10 15:20   ` Ard Biesheuvel
@ 2023-03-11 15:09     ` Evgeniy Baskov
  2023-03-11 17:39       ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 15:09 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 18:20, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Explicitly change sections memory attributes in efi_pe_entry in case
>> of incorrect EFI implementations and to reduce access rights to
>> compressed kernel blob. By default it is set executable due to
>> restriction in maximum number of sections that can fit before zero
>> page.
>> 
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> I don't think we need this patch. Firmware that cares about W^X will
> map the PE image with R-X for text/rodata and RW- for data/bss, which
> is sufficient, and firmware that doesn't is a lost cause anyway.

This patch were here mainly here to make .rodata non-executable and for
the UEFI handover protocol, for which attributes are usually not getting
applied.

Since the UEFI handover protocol is deprecated, I'll exclude patches 
from
v5 and maybe submit it separately modified to apply attributes only when
booting via this protocol.

> 
> 
>> ---
>>  drivers/firmware/efi/libstub/x86-stub.c | 54 
>> +++++++++++++++++++++++++
>>  1 file changed, 54 insertions(+)
>> 
>> diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
>> b/drivers/firmware/efi/libstub/x86-stub.c
>> index 1f0a2e7075c3..60697fcd8950 100644
>> --- a/drivers/firmware/efi/libstub/x86-stub.c
>> +++ b/drivers/firmware/efi/libstub/x86-stub.c
>> @@ -27,6 +27,12 @@ const efi_dxe_services_table_t *efi_dxe_table;
>>  u32 image_offset __section(".data");
>>  static efi_loaded_image_t *image __section(".data");
>> 
>> +extern char _head[], _ehead[];
>> +extern char _compressed[], _ecompressed[];
>> +extern char _text[], _etext[];
>> +extern char _rodata[], _erodata[];
>> +extern char _data[];
>> +
>>  static efi_status_t
>>  preserve_pci_rom_image(efi_pci_io_protocol_t *pci, struct 
>> pci_setup_rom **__rom)
>>  {
>> @@ -343,6 +349,52 @@ void __noreturn efi_exit(efi_handle_t handle, 
>> efi_status_t status)
>>                 asm("hlt");
>>  }
>> 
>> +
>> +/*
>> + * Manually setup memory protection attributes for each ELF section
>> + * since we cannot do it properly by using PE sections.
>> + */
>> +static void setup_sections_memory_protection(unsigned long 
>> image_base)
>> +{
>> +#ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
>> +       efi_dxe_table = 
>> get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
>> +
>> +       if (!efi_dxe_table ||
>> +           efi_dxe_table->hdr.signature != 
>> EFI_DXE_SERVICES_TABLE_SIGNATURE) {
>> +               efi_warn("Unable to locate EFI DXE services table\n");
>> +               efi_dxe_table = NULL;
>> +               return;
>> +       }
>> +
>> +       /* .setup [image_base, _head] */
>> +       efi_adjust_memory_range_protection(image_base,
>> +                                          (unsigned long)_head - 
>> image_base,
>> +                                          EFI_MEMORY_RO | 
>> EFI_MEMORY_XP);
>> +       /* .head.text [_head, _ehead] */
>> +       efi_adjust_memory_range_protection((unsigned long)_head,
>> +                                          (unsigned long)_ehead - 
>> (unsigned long)_head,
>> +                                          EFI_MEMORY_RO);
>> +       /* .rodata..compressed [_compressed, _ecompressed] */
>> +       efi_adjust_memory_range_protection((unsigned long)_compressed,
>> +                                          (unsigned long)_ecompressed 
>> - (unsigned long)_compressed,
>> +                                          EFI_MEMORY_RO | 
>> EFI_MEMORY_XP);
>> +       /* .text [_text, _etext] */
>> +       efi_adjust_memory_range_protection((unsigned long)_text,
>> +                                          (unsigned long)_etext - 
>> (unsigned long)_text,
>> +                                          EFI_MEMORY_RO);
>> +       /* .rodata [_rodata, _erodata] */
>> +       efi_adjust_memory_range_protection((unsigned long)_rodata,
>> +                                          (unsigned long)_erodata - 
>> (unsigned long)_rodata,
>> +                                          EFI_MEMORY_RO | 
>> EFI_MEMORY_XP);
>> +       /* .data, .bss [_data, _end] */
>> +       efi_adjust_memory_range_protection((unsigned long)_data,
>> +                                          (unsigned long)_end - 
>> (unsigned long)_data,
>> +                                          EFI_MEMORY_XP);
>> +#else
>> +       (void)image_base;
>> +#endif
>> +}
>> +
>>  void __noreturn efi_stub_entry(efi_handle_t handle,
>>                                efi_system_table_t *sys_table_arg,
>>                                struct boot_params *boot_params);
>> @@ -687,6 +739,8 @@ asmlinkage unsigned long efi_main(efi_handle_t 
>> handle,
>>                 efi_dxe_table = NULL;
>>         }
>> 
>> +       setup_sections_memory_protection(bzimage_addr - image_offset);
>> +
>>  #ifdef CONFIG_CMDLINE_BOOL
>>         status = efi_parse_options(CONFIG_CMDLINE);
>>         if (status != EFI_SUCCESS) {
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 23/26] efi/libstub: Use memory attribute protocol
  2023-03-10 16:13   ` Ard Biesheuvel
@ 2023-03-11 15:14     ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-11 15:14 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-10 19:13, Ard Biesheuvel wrote:
> On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> Add EFI_MEMORY_ATTRIBUTE_PROTOCOL as preferred alternative to DXE
>> services for changing memory attributes in the EFISTUB.
>> 
>> Use DXE services only as a fallback in case aforementioned protocol
>> is not supported by UEFI implementation.
>> 
>> Move DXE services initialization code closer to the place they are 
>> used
>> to match EFI_MEMORY_ATTRIBUTE_PROTOCOL initialization code.
>> 
>> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> Tested-by: Peter Jones <pjones@redhat.com>
>> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> 
> I'm not convinced about the use of the DXE services for this, and I
> think we should replace this patch with changes that base all the new
> protection code on the EFI memory attributes protocol only.
> 
> We introduced that DXE code to remove protections from memory that was
> mapped read-only and/or non-executable, and described as such in the
> GCD memory map.
> 
> Using it to manipulate restricted permissions like this is quite a
> different thing, and sadly (at least in EDK2), the GCD system memory
> map is not kept in sync with the updated permissions, i.e, the W^X
> protections for loaded images and the NX protection for arbitrary page
> allocations are both based on the PI CPU arch protocol, which
> manipulates the page tables directly, but does not record the modified
> attributes in the GCD or EFI memory maps, as this would result in
> massive fragmentation and break lots of other things.
> 
> That means that, except for the specific use case for which we
> introduced the DXE services calls, the only reliable way to figure out
> what permission attributes a certain range of memory is using is the
> EFI memory attributes protocol, and I don't think we should use
> anything else for tightening down these protections.
> 
> 

Makes sense. I'll change the patch to only widen the permissions with
DXE, so it aligns with the original intention. And only apply stricter
permissions with memory attribute protocol.

Thanks!

> 
> 
>> ---
>>  drivers/firmware/efi/libstub/mem.c      | 168 
>> ++++++++++++++++++------
>>  drivers/firmware/efi/libstub/x86-stub.c |  17 ---
>>  2 files changed, 128 insertions(+), 57 deletions(-)
>> 
>> diff --git a/drivers/firmware/efi/libstub/mem.c 
>> b/drivers/firmware/efi/libstub/mem.c
>> index 3e47e5931f04..07d54c88c62e 100644
>> --- a/drivers/firmware/efi/libstub/mem.c
>> +++ b/drivers/firmware/efi/libstub/mem.c
>> @@ -5,6 +5,9 @@
>> 
>>  #include "efistub.h"
>> 
>> +const efi_dxe_services_table_t *efi_dxe_table;
>> +efi_memory_attribute_protocol_t *efi_mem_attrib_proto;
>> +
>>  /**
>>   * efi_get_memory_map() - get memory map
>>   * @map:               pointer to memory map pointer to which to 
>> assign the
>> @@ -129,66 +132,47 @@ void efi_free(unsigned long size, unsigned long 
>> addr)
>>         efi_bs_call(free_pages, addr, nr_pages);
>>  }
>> 
>> -/**
>> - * efi_adjust_memory_range_protection() - change memory range 
>> protection attributes
>> - * @start:     memory range start address
>> - * @size:      memory range size
>> - *
>> - * Actual memory range for which memory attributes are modified is
>> - * the smallest ranged with start address and size aligned to 
>> EFI_PAGE_SIZE
>> - * that includes [start, start + size].
>> - *
>> - * @return: status code
>> - */
>> -efi_status_t efi_adjust_memory_range_protection(unsigned long start,
>> -                                               unsigned long size,
>> -                                               unsigned long 
>> attributes)
>> +static void retrieve_dxe_table(void)
>> +{
>> +       efi_dxe_table = 
>> get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
>> +       if (efi_dxe_table &&
>> +           efi_dxe_table->hdr.signature != 
>> EFI_DXE_SERVICES_TABLE_SIGNATURE) {
>> +               efi_warn("Ignoring DXE services table: invalid 
>> signature\n");
>> +               efi_dxe_table = NULL;
>> +       }
>> +}
>> +
>> +static efi_status_t adjust_mem_attrib_dxe(efi_physical_addr_t 
>> rounded_start,
>> +                                         efi_physical_addr_t 
>> rounded_end,
>> +                                         unsigned long attributes)
>>  {
>>         efi_status_t status;
>>         efi_gcd_memory_space_desc_t desc;
>> -       efi_physical_addr_t end, next;
>> -       efi_physical_addr_t rounded_start, rounded_end;
>> +       efi_physical_addr_t end, next, start;
>>         efi_physical_addr_t unprotect_start, unprotect_size;
>> 
>> -       if (efi_dxe_table == NULL)
>> -               return EFI_UNSUPPORTED;
>> +       if (!efi_dxe_table) {
>> +               retrieve_dxe_table();
>> 
>> -       /*
>> -        * This function should not be used to modify attributes
>> -        * other than writable/executable.
>> -        */
>> -
>> -       if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
>> -               return EFI_INVALID_PARAMETER;
>> -
>> -       /*
>> -        * Disallow simultaniously executable and writable memory
>> -        * to inforce W^X policy if direct extraction code is enabled.
>> -        */
>> -
>> -       if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
>> -               efi_warn("W^X violation at [%08lx,%08lx]\n",
>> -                        (unsigned long)rounded_start,
>> -                        (unsigned long)rounded_end);
>> +               if (!efi_dxe_table)
>> +                       return EFI_UNSUPPORTED;
>>         }
>> 
>> -       rounded_start = rounddown(start, EFI_PAGE_SIZE);
>> -       rounded_end = roundup(start + size, EFI_PAGE_SIZE);
>> -
>>         /*
>>          * Don't modify memory region attributes, they are
>>          * already suitable, to lower the possibility to
>>          * encounter firmware bugs.
>>          */
>> 
>> -       for (end = start + size; start < end; start = next) {
>> +
>> +       for (start = rounded_start, end = rounded_end; start < end; 
>> start = next) {
>> 
>>                 status = efi_dxe_call(get_memory_space_descriptor,
>>                                       start, &desc);
>> 
>>                 if (status != EFI_SUCCESS) {
>>                         efi_warn("Unable to get memory descriptor at 
>> %lx\n",
>> -                                start);
>> +                                (unsigned long)start);
>>                         return status;
>>                 }
>> 
>> @@ -230,3 +214,107 @@ efi_status_t 
>> efi_adjust_memory_range_protection(unsigned long start,
>> 
>>         return EFI_SUCCESS;
>>  }
>> +
>> +static void retrieve_memory_attributes_proto(void)
>> +{
>> +       efi_status_t status;
>> +       efi_guid_t guid = EFI_MEMORY_ATTRIBUTE_PROTOCOL_GUID;
>> +
>> +       status = efi_bs_call(locate_protocol, &guid, NULL,
>> +                            (void **)&efi_mem_attrib_proto);
>> +       if (status != EFI_SUCCESS)
>> +               efi_mem_attrib_proto = NULL;
>> +}
>> +
>> +/**
>> + * efi_adjust_memory_range_protection() - change memory range 
>> protection attributes
>> + * @start:     memory range start address
>> + * @size:      memory range size
>> + *
>> + * Actual memory range for which memory attributes are modified is
>> + * the smallest ranged with start address and size aligned to 
>> EFI_PAGE_SIZE
>> + * that includes [start, start + size].
>> + *
>> + * This function first attempts to use EFI_MEMORY_ATTRIBUTE_PROTOCOL,
>> + * that is a part of UEFI Specification since version 2.10.
>> + * If the protocol is unavailable it falls back to DXE services 
>> functions.
>> + *
>> + * @return: status code
>> + */
>> +efi_status_t efi_adjust_memory_range_protection(unsigned long start,
>> +                                               unsigned long size,
>> +                                               unsigned long 
>> attributes)
>> +{
>> +       efi_status_t status;
>> +       efi_physical_addr_t rounded_start, rounded_end;
>> +       unsigned long attr_clear;
>> +
>> +       /*
>> +        * This function should not be used to modify attributes
>> +        * other than writable/executable.
>> +        */
>> +
>> +       if ((attributes & ~(EFI_MEMORY_RO | EFI_MEMORY_XP)) != 0)
>> +               return EFI_INVALID_PARAMETER;
>> +
>> +       /*
>> +        * Warn if requested to make memory simultaneously
>> +        * executable and writable to enforce W^X policy.
>> +        */
>> +
>> +       if ((attributes & (EFI_MEMORY_RO | EFI_MEMORY_XP)) == 0) {
>> +               efi_warn("W^X violation at  [%08lx,%08lx]",
>> +                        (unsigned long)rounded_start,
>> +                        (unsigned long)rounded_end);
>> +       }
>> +
>> +       rounded_start = rounddown(start, EFI_PAGE_SIZE);
>> +       rounded_end = roundup(start + size, EFI_PAGE_SIZE);
>> +
>> +       if (!efi_mem_attrib_proto) {
>> +               retrieve_memory_attributes_proto();
>> +
>> +               /* Fall back to DXE services if unsupported */
>> +               if (!efi_mem_attrib_proto) {
>> +                       return adjust_mem_attrib_dxe(rounded_start,
>> +                                                    rounded_end,
>> +                                                    attributes);
>> +               }
>> +       }
>> +
>> +       /*
>> +        * Unlike DXE services functions, 
>> EFI_MEMORY_ATTRIBUTE_PROTOCOL
>> +        * does not clear unset protection bit, so it needs to be 
>> cleared
>> +        * explcitly
>> +        */
>> +
>> +       attr_clear = ~attributes &
>> +                    (EFI_MEMORY_RO | EFI_MEMORY_XP | EFI_MEMORY_RP);
>> +
>> +       status = efi_call_proto(efi_mem_attrib_proto,
>> +                               clear_memory_attributes,
>> +                               rounded_start,
>> +                               rounded_end - rounded_start,
>> +                               attr_clear);
>> +       if (status != EFI_SUCCESS) {
>> +               efi_warn("Failed to clear memory attributes at 
>> [%08lx,%08lx]: %lx",
>> +                        (unsigned long)rounded_start,
>> +                        (unsigned long)rounded_end,
>> +                        status);
>> +               return status;
>> +       }
>> +
>> +       status = efi_call_proto(efi_mem_attrib_proto,
>> +                               set_memory_attributes,
>> +                               rounded_start,
>> +                               rounded_end - rounded_start,
>> +                               attributes);
>> +       if (status != EFI_SUCCESS) {
>> +               efi_warn("Failed to set memory attributes at 
>> [%08lx,%08lx]: %lx",
>> +                        (unsigned long)rounded_start,
>> +                        (unsigned long)rounded_end,
>> +                        status);
>> +       }
>> +
>> +       return status;
>> +}
>> diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
>> b/drivers/firmware/efi/libstub/x86-stub.c
>> index 60697fcd8950..06a62b121521 100644
>> --- a/drivers/firmware/efi/libstub/x86-stub.c
>> +++ b/drivers/firmware/efi/libstub/x86-stub.c
>> @@ -23,7 +23,6 @@
>>  #define MAXMEM_X86_64_4LEVEL (1ull << 46)
>> 
>>  const efi_system_table_t *efi_system_table;
>> -const efi_dxe_services_table_t *efi_dxe_table;
>>  u32 image_offset __section(".data");
>>  static efi_loaded_image_t *image __section(".data");
>> 
>> @@ -357,15 +356,6 @@ void __noreturn efi_exit(efi_handle_t handle, 
>> efi_status_t status)
>>  static void setup_sections_memory_protection(unsigned long 
>> image_base)
>>  {
>>  #ifdef CONFIG_EFI_DXE_MEM_ATTRIBUTES
>> -       efi_dxe_table = 
>> get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
>> -
>> -       if (!efi_dxe_table ||
>> -           efi_dxe_table->hdr.signature != 
>> EFI_DXE_SERVICES_TABLE_SIGNATURE) {
>> -               efi_warn("Unable to locate EFI DXE services table\n");
>> -               efi_dxe_table = NULL;
>> -               return;
>> -       }
>> -
>>         /* .setup [image_base, _head] */
>>         efi_adjust_memory_range_protection(image_base,
>>                                            (unsigned long)_head - 
>> image_base,
>> @@ -732,13 +722,6 @@ asmlinkage unsigned long efi_main(efi_handle_t 
>> handle,
>>         if (efi_system_table->hdr.signature != 
>> EFI_SYSTEM_TABLE_SIGNATURE)
>>                 efi_exit(handle, EFI_INVALID_PARAMETER);
>> 
>> -       efi_dxe_table = 
>> get_efi_config_table(EFI_DXE_SERVICES_TABLE_GUID);
>> -       if (efi_dxe_table &&
>> -           efi_dxe_table->hdr.signature != 
>> EFI_DXE_SERVICES_TABLE_SIGNATURE) {
>> -               efi_warn("Ignoring DXE services table: invalid 
>> signature\n");
>> -               efi_dxe_table = NULL;
>> -       }
>> -
>>         setup_sections_memory_protection(bzimage_addr - image_offset);
>> 
>>  #ifdef CONFIG_CMDLINE_BOOL
>> --
>> 2.37.4
>> 

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub
  2023-03-11 14:49     ` Evgeniy Baskov
@ 2023-03-11 17:27       ` Ard Biesheuvel
  2023-03-12 12:10         ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-11 17:27 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Sat, 11 Mar 2023 at 15:49, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-10 17:59, Ard Biesheuvel wrote:
> > On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> This is required to fit more sections in PE section tables,
> >> since its size is restricted by zero page located at specific offset
> >> after the PE header.
> >>
> >> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >
> > I'd prefer to rip this out altogether.
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?id=9510f6f04f579b9a3f54ad762c75ab2d905e37d8
>
> Sounds great! Can I replace this patch with yours in v5?
>

Of course.

> >
> > (and refer to the other thread in linux-efi@)
>
> Which thread exactly? The one about the removal of
> real-mode code?
>

Yes, this one

https://lore.kernel.org/linux-efi/20230308202209.2980947-1-ardb@kernel.org/

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2023-03-11 15:02     ` Evgeniy Baskov
@ 2023-03-11 17:31       ` Ard Biesheuvel
  2023-03-12 12:01         ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-11 17:31 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Sat, 11 Mar 2023 at 16:02, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-10 18:17, Ard Biesheuvel wrote:
> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> Currently kernel image is not fully compliant PE image, so it may
> >> fail to boot with stricter implementations of UEFI PE loaders.
> >>
> >> Set minimal alignments and sizes specified by PE documentation [1]
> >> referenced by UEFI specification [2]. Align PE header to 8 bytes.
> >>
> >> Generate PE sections dynamically. This simplifies code, since with
> >> current implementation all of the sections needs to be defined in
> >> header.S, where most section header fields do not hold valid values,
> >> except for their names. Before the change, it also held flags,
> >> but now flags depend on kernel configuration and it is simpler
> >> to set them from build.c too.
> >>
> >> Setup sections protection. Since we cannot fit every needed section,
> >> set a part of protection flags dynamically during initialization.
> >> This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.
> >>
> >> [1]
> >> https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
> >> [2]
> >> https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf
> >>
> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >
> > I would prefer it if we didn't rewrite the build tool this way.
> >
> > Having the sections in header.S in the order they appear in the binary
> > is rather useful, and I don't think we should manipulate the section
> > flags based on whether CONFIG_DXE_MEM_ATTRIBUTES is set. I also don't
> > think we need more than .text / .,data (as discussed in the other
> > thread on linux-efi@)
> >
> > Furthermore, I had a look at the audk PE loader [0], and I think it is
> > being overly pedantic.
> >
> > The PE/COFF spec does not require that all sections are virtually
> > contiguous, and it does not require that the file content is
> > completely covered by either the header or by a section.
> >
> > So what I would prefer to do is the following:
> >
> > Sections:
> > Idx Name          Size     VMA              Type
> >   0 .reloc        00000200 0000000000002000 DATA
> >   1 .compat       00000200 0000000000003000 DATA
> >   2 .text         00bee000 0000000000004000 TEXT
> >   3 .data         00002200 0000000000bf2000 DATA
> >
> > using 4k section alignment and 512 byte file alignment, and a header
> > size of 0x200 as before (This requires my patch that allows the setup
> > header to remain unmapped when running the stub [1])
> >
> > The reloc and compat payloads are placed at the end of the setup data
> > as before, but increased in size to 512 bytes each, and then mapped
> > non-1:1 into the RVA space.
> >
> > This works happily with both the existing PE loader as well as the
> > audk one, but with the pedantic flags disabled.
> >
>
> This makes sense. I'll change this patch to use this layout and
> to keep sections in headers.S before sending v5. (and I guess I'll
> make the compressed kernel a part of .text). I have a few questions
> though:
>
> This layout assumes having the local copy of the bootparams as
> in your RFC patches, right?
>

Indeed. Otherwise, the setup header may not have been copied to memory
by the loader.

> Can I keep the .rodata -- 5th section fits in the section table
> without much work?
>

You could, but at least the current PE/COFF loader in EDK2 will map it
read/write, as it only distinguishes between executable sections and
non-executable sections.

> Also, why .reloc is at offset 0x2000 and not just 0x1000, is there
> anything important I am missing? I understand that is cannot be 0
> and should be aligned on page size, but nothing else comes to my
> mind...
>

That was just arbitrary, because the raw allocations of reloc and
compat are also allocated towards the end. But I guess starting at
0x1000 for .reloc makes more sense so feel free to change that.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes
  2023-03-11 15:09     ` Evgeniy Baskov
@ 2023-03-11 17:39       ` Ard Biesheuvel
  2023-03-12 12:10         ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-11 17:39 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Sat, 11 Mar 2023 at 16:09, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-10 18:20, Ard Biesheuvel wrote:
> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> Explicitly change sections memory attributes in efi_pe_entry in case
> >> of incorrect EFI implementations and to reduce access rights to
> >> compressed kernel blob. By default it is set executable due to
> >> restriction in maximum number of sections that can fit before zero
> >> page.
> >>
> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >
> > I don't think we need this patch. Firmware that cares about W^X will
> > map the PE image with R-X for text/rodata and RW- for data/bss, which
> > is sufficient, and firmware that doesn't is a lost cause anyway.
>
> This patch were here mainly here to make .rodata non-executable and for
> the UEFI handover protocol, for which attributes are usually not getting
> applied.
>
> Since the UEFI handover protocol is deprecated, I'll exclude patches
> from
> v5 and maybe submit it separately modified to apply attributes only when
> booting via this protocol.
>

I think the issue here is that loaders that use the UEFI handover
protocol use their own implementations of LoadImage/StartImage as
well, and some of those tend to do little more than copy the image
into memory and jump to the EFI handover protocol entry point, without
even accounting for the image size in memory or clearing the bss.

To be honest, even though I understand the reason these had to be
implemented, I'm a bit reluctant to cater for the needs of such
loaders, given that these are all downstream distro forks of GRUB
(with shim) with varying levels of adherence to the PE/COFF spec.

I'm happy to revisit this later if others feel this is important, but
for the moment, I'd prefer it if we could focus on making the x86
image work better with compliant loaders, which is what this series is
primarily about.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2023-03-11 17:31       ` Ard Biesheuvel
@ 2023-03-12 12:01         ` Evgeniy Baskov
  2023-03-12 13:09           ` Ard Biesheuvel
  0 siblings, 1 reply; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-12 12:01 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-11 20:31, Ard Biesheuvel wrote:
> On Sat, 11 Mar 2023 at 16:02, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> On 2023-03-10 18:17, Ard Biesheuvel wrote:
>> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >>
>> >> Currently kernel image is not fully compliant PE image, so it may
>> >> fail to boot with stricter implementations of UEFI PE loaders.
>> >>
>> >> Set minimal alignments and sizes specified by PE documentation [1]
>> >> referenced by UEFI specification [2]. Align PE header to 8 bytes.
>> >>
>> >> Generate PE sections dynamically. This simplifies code, since with
>> >> current implementation all of the sections needs to be defined in
>> >> header.S, where most section header fields do not hold valid values,
>> >> except for their names. Before the change, it also held flags,
>> >> but now flags depend on kernel configuration and it is simpler
>> >> to set them from build.c too.
>> >>
>> >> Setup sections protection. Since we cannot fit every needed section,
>> >> set a part of protection flags dynamically during initialization.
>> >> This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.
>> >>
>> >> [1]
>> >> https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
>> >> [2]
>> >> https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf
>> >>
>> >> Tested-by: Peter Jones <pjones@redhat.com>
>> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> >
>> > I would prefer it if we didn't rewrite the build tool this way.
>> >
>> > Having the sections in header.S in the order they appear in the binary
>> > is rather useful, and I don't think we should manipulate the section
>> > flags based on whether CONFIG_DXE_MEM_ATTRIBUTES is set. I also don't
>> > think we need more than .text / .,data (as discussed in the other
>> > thread on linux-efi@)
>> >
>> > Furthermore, I had a look at the audk PE loader [0], and I think it is
>> > being overly pedantic.
>> >
>> > The PE/COFF spec does not require that all sections are virtually
>> > contiguous, and it does not require that the file content is
>> > completely covered by either the header or by a section.
>> >
>> > So what I would prefer to do is the following:
>> >
>> > Sections:
>> > Idx Name          Size     VMA              Type
>> >   0 .reloc        00000200 0000000000002000 DATA
>> >   1 .compat       00000200 0000000000003000 DATA
>> >   2 .text         00bee000 0000000000004000 TEXT
>> >   3 .data         00002200 0000000000bf2000 DATA
>> >
>> > using 4k section alignment and 512 byte file alignment, and a header
>> > size of 0x200 as before (This requires my patch that allows the setup
>> > header to remain unmapped when running the stub [1])
>> >
>> > The reloc and compat payloads are placed at the end of the setup data
>> > as before, but increased in size to 512 bytes each, and then mapped
>> > non-1:1 into the RVA space.
>> >
>> > This works happily with both the existing PE loader as well as the
>> > audk one, but with the pedantic flags disabled.
>> >
>> 
>> This makes sense. I'll change this patch to use this layout and
>> to keep sections in headers.S before sending v5. (and I guess I'll
>> make the compressed kernel a part of .text). I have a few questions
>> though:
>> 
>> This layout assumes having the local copy of the bootparams as
>> in your RFC patches, right?
>> 
> 
> Indeed. Otherwise, the setup header may not have been copied to memory
> by the loader.
> 
>> Can I keep the .rodata -- 5th section fits in the section table
>> without much work?
>> 
> 
> You could, but at least the current PE/COFF loader in EDK2 will map it
> read/write, as it only distinguishes between executable sections and
> non-executable sections.
> 

At least it will slightly improve security for some implementations
(e.g. audk, while being overly strict support RO sections)

>> Also, why .reloc is at offset 0x2000 and not just 0x1000, is there
>> anything important I am missing? I understand that is cannot be 0
>> and should be aligned on page size, but nothing else comes to my
>> mind...
>> 
> 
> That was just arbitrary, because the raw allocations of reloc and
> compat are also allocated towards the end. But I guess starting at
> 0x1000 for .reloc makes more sense so feel free to change that.

Thanks for clarifications!

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes
  2023-03-11 17:39       ` Ard Biesheuvel
@ 2023-03-12 12:10         ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-12 12:10 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-11 20:39, Ard Biesheuvel wrote:
> On Sat, 11 Mar 2023 at 16:09, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> On 2023-03-10 18:20, Ard Biesheuvel wrote:
>> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >>
>> >> Explicitly change sections memory attributes in efi_pe_entry in case
>> >> of incorrect EFI implementations and to reduce access rights to
>> >> compressed kernel blob. By default it is set executable due to
>> >> restriction in maximum number of sections that can fit before zero
>> >> page.
>> >>
>> >> Tested-by: Peter Jones <pjones@redhat.com>
>> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> >
>> > I don't think we need this patch. Firmware that cares about W^X will
>> > map the PE image with R-X for text/rodata and RW- for data/bss, which
>> > is sufficient, and firmware that doesn't is a lost cause anyway.
>> 
>> This patch were here mainly here to make .rodata non-executable and 
>> for
>> the UEFI handover protocol, for which attributes are usually not 
>> getting
>> applied.
>> 
>> Since the UEFI handover protocol is deprecated, I'll exclude patches
>> from
>> v5 and maybe submit it separately modified to apply attributes only 
>> when
>> booting via this protocol.
>> 
> 
> I think the issue here is that loaders that use the UEFI handover
> protocol use their own implementations of LoadImage/StartImage as
> well, and some of those tend to do little more than copy the image
> into memory and jump to the EFI handover protocol entry point, without
> even accounting for the image size in memory or clearing the bss.
> 

AFAIK this patch does not break loaders that load PE image as a flat
binary, since it only operates on ELF sections that are mapped 1-to-1.
But that's just the note for a future.

> To be honest, even though I understand the reason these had to be
> implemented, I'm a bit reluctant to cater for the needs of such
> loaders, given that these are all downstream distro forks of GRUB
> (with shim) with varying levels of adherence to the PE/COFF spec.
> 
> I'm happy to revisit this later if others feel this is important, but
> for the moment, I'd prefer it if we could focus on making the x86
> image work better with compliant loaders, which is what this series is
> primarily about.

That's very reasonable. I'll put this patch aside for now then.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub
  2023-03-11 17:27       ` Ard Biesheuvel
@ 2023-03-12 12:10         ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-12 12:10 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-11 20:27, Ard Biesheuvel wrote:
> On Sat, 11 Mar 2023 at 15:49, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> On 2023-03-10 17:59, Ard Biesheuvel wrote:
>> > On Thu, 15 Dec 2022 at 13:40, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >>
>> >> This is required to fit more sections in PE section tables,
>> >> since its size is restricted by zero page located at specific offset
>> >> after the PE header.
>> >>
>> >> Tested-by: Mario Limonciello <mario.limonciello@amd.com>
>> >> Tested-by: Peter Jones <pjones@redhat.com>
>> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> >
>> > I'd prefer to rip this out altogether.
>> >
>> > https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?id=9510f6f04f579b9a3f54ad762c75ab2d905e37d8
>> 
>> Sounds great! Can I replace this patch with yours in v5?
>> 
> 
> Of course.
> 
>> >
>> > (and refer to the other thread in linux-efi@)
>> 
>> Which thread exactly? The one about the removal of
>> real-mode code?
>> 
> 
> Yes, this one
> 
> https://lore.kernel.org/linux-efi/20230308202209.2980947-1-ardb@kernel.org/

Thanks!

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2023-03-12 12:01         ` Evgeniy Baskov
@ 2023-03-12 13:09           ` Ard Biesheuvel
  2023-03-13  9:11             ` Evgeniy Baskov
  0 siblings, 1 reply; 78+ messages in thread
From: Ard Biesheuvel @ 2023-03-12 13:09 UTC (permalink / raw)
  To: Evgeniy Baskov
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On Sun, 12 Mar 2023 at 13:02, Evgeniy Baskov <baskov@ispras.ru> wrote:
>
> On 2023-03-11 20:31, Ard Biesheuvel wrote:
> > On Sat, 11 Mar 2023 at 16:02, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >>
> >> On 2023-03-10 18:17, Ard Biesheuvel wrote:
> >> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
> >> >>
> >> >> Currently kernel image is not fully compliant PE image, so it may
> >> >> fail to boot with stricter implementations of UEFI PE loaders.
> >> >>
> >> >> Set minimal alignments and sizes specified by PE documentation [1]
> >> >> referenced by UEFI specification [2]. Align PE header to 8 bytes.
> >> >>
> >> >> Generate PE sections dynamically. This simplifies code, since with
> >> >> current implementation all of the sections needs to be defined in
> >> >> header.S, where most section header fields do not hold valid values,
> >> >> except for their names. Before the change, it also held flags,
> >> >> but now flags depend on kernel configuration and it is simpler
> >> >> to set them from build.c too.
> >> >>
> >> >> Setup sections protection. Since we cannot fit every needed section,
> >> >> set a part of protection flags dynamically during initialization.
> >> >> This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.
> >> >>
> >> >> [1]
> >> >> https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
> >> >> [2]
> >> >> https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf
> >> >>
> >> >> Tested-by: Peter Jones <pjones@redhat.com>
> >> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
> >> >
> >> > I would prefer it if we didn't rewrite the build tool this way.
> >> >
> >> > Having the sections in header.S in the order they appear in the binary
> >> > is rather useful, and I don't think we should manipulate the section
> >> > flags based on whether CONFIG_DXE_MEM_ATTRIBUTES is set. I also don't
> >> > think we need more than .text / .,data (as discussed in the other
> >> > thread on linux-efi@)
> >> >
> >> > Furthermore, I had a look at the audk PE loader [0], and I think it is
> >> > being overly pedantic.
> >> >
> >> > The PE/COFF spec does not require that all sections are virtually
> >> > contiguous, and it does not require that the file content is
> >> > completely covered by either the header or by a section.
> >> >
> >> > So what I would prefer to do is the following:
> >> >
> >> > Sections:
> >> > Idx Name          Size     VMA              Type
> >> >   0 .reloc        00000200 0000000000002000 DATA
> >> >   1 .compat       00000200 0000000000003000 DATA
> >> >   2 .text         00bee000 0000000000004000 TEXT
> >> >   3 .data         00002200 0000000000bf2000 DATA
> >> >
> >> > using 4k section alignment and 512 byte file alignment, and a header
> >> > size of 0x200 as before (This requires my patch that allows the setup
> >> > header to remain unmapped when running the stub [1])
> >> >
> >> > The reloc and compat payloads are placed at the end of the setup data
> >> > as before, but increased in size to 512 bytes each, and then mapped
> >> > non-1:1 into the RVA space.
> >> >
> >> > This works happily with both the existing PE loader as well as the
> >> > audk one, but with the pedantic flags disabled.
> >> >
> >>
> >> This makes sense. I'll change this patch to use this layout and
> >> to keep sections in headers.S before sending v5. (and I guess I'll
> >> make the compressed kernel a part of .text). I have a few questions
> >> though:
> >>
> >> This layout assumes having the local copy of the bootparams as
> >> in your RFC patches, right?
> >>
> >
> > Indeed. Otherwise, the setup header may not have been copied to memory
> > by the loader.
> >
> >> Can I keep the .rodata -- 5th section fits in the section table
> >> without much work?
> >>
> >
> > You could, but at least the current PE/COFF loader in EDK2 will map it
> > read/write, as it only distinguishes between executable sections and
> > non-executable sections.
> >
>
> At least it will slightly improve security for some implementations
> (e.g. audk, while being overly strict support RO sections)
>

Yeah, but more common loaders will put the compressed data in a
writable region. I'd prefer to have a simple and common baseline where
we always just use R-X for all text and rodata, and RW- for everything
else.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [PATCH v4 20/26] x86/build: Make generated PE more spec compliant
  2023-03-12 13:09           ` Ard Biesheuvel
@ 2023-03-13  9:11             ` Evgeniy Baskov
  0 siblings, 0 replies; 78+ messages in thread
From: Evgeniy Baskov @ 2023-03-13  9:11 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Borislav Petkov, Andy Lutomirski, Dave Hansen, Ingo Molnar,
	Peter Zijlstra, Thomas Gleixner, Alexey Khoroshilov, Peter Jones,
	Limonciello, Mario, joeyli, lvc-project, x86, linux-efi,
	linux-kernel, linux-hardening

On 2023-03-12 16:09, Ard Biesheuvel wrote:
> On Sun, 12 Mar 2023 at 13:02, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> 
>> On 2023-03-11 20:31, Ard Biesheuvel wrote:
>> > On Sat, 11 Mar 2023 at 16:02, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >>
>> >> On 2023-03-10 18:17, Ard Biesheuvel wrote:
>> >> > On Thu, 15 Dec 2022 at 13:42, Evgeniy Baskov <baskov@ispras.ru> wrote:
>> >> >>
>> >> >> Currently kernel image is not fully compliant PE image, so it may
>> >> >> fail to boot with stricter implementations of UEFI PE loaders.
>> >> >>
>> >> >> Set minimal alignments and sizes specified by PE documentation [1]
>> >> >> referenced by UEFI specification [2]. Align PE header to 8 bytes.
>> >> >>
>> >> >> Generate PE sections dynamically. This simplifies code, since with
>> >> >> current implementation all of the sections needs to be defined in
>> >> >> header.S, where most section header fields do not hold valid values,
>> >> >> except for their names. Before the change, it also held flags,
>> >> >> but now flags depend on kernel configuration and it is simpler
>> >> >> to set them from build.c too.
>> >> >>
>> >> >> Setup sections protection. Since we cannot fit every needed section,
>> >> >> set a part of protection flags dynamically during initialization.
>> >> >> This step is omitted if CONFIG_EFI_DXE_MEM_ATTRIBUTES is not set.
>> >> >>
>> >> >> [1]
>> >> >> https://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/pecoff_v83.docx
>> >> >> [2]
>> >> >> https://uefi.org/sites/default/files/resources/UEFI_Spec_2_9_2021_03_18.pdf
>> >> >>
>> >> >> Tested-by: Peter Jones <pjones@redhat.com>
>> >> >> Signed-off-by: Evgeniy Baskov <baskov@ispras.ru>
>> >> >
>> >> > I would prefer it if we didn't rewrite the build tool this way.
>> >> >
>> >> > Having the sections in header.S in the order they appear in the binary
>> >> > is rather useful, and I don't think we should manipulate the section
>> >> > flags based on whether CONFIG_DXE_MEM_ATTRIBUTES is set. I also don't
>> >> > think we need more than .text / .,data (as discussed in the other
>> >> > thread on linux-efi@)
>> >> >
>> >> > Furthermore, I had a look at the audk PE loader [0], and I think it is
>> >> > being overly pedantic.
>> >> >
>> >> > The PE/COFF spec does not require that all sections are virtually
>> >> > contiguous, and it does not require that the file content is
>> >> > completely covered by either the header or by a section.
>> >> >
>> >> > So what I would prefer to do is the following:
>> >> >
>> >> > Sections:
>> >> > Idx Name          Size     VMA              Type
>> >> >   0 .reloc        00000200 0000000000002000 DATA
>> >> >   1 .compat       00000200 0000000000003000 DATA
>> >> >   2 .text         00bee000 0000000000004000 TEXT
>> >> >   3 .data         00002200 0000000000bf2000 DATA
>> >> >
>> >> > using 4k section alignment and 512 byte file alignment, and a header
>> >> > size of 0x200 as before (This requires my patch that allows the setup
>> >> > header to remain unmapped when running the stub [1])
>> >> >
>> >> > The reloc and compat payloads are placed at the end of the setup data
>> >> > as before, but increased in size to 512 bytes each, and then mapped
>> >> > non-1:1 into the RVA space.
>> >> >
>> >> > This works happily with both the existing PE loader as well as the
>> >> > audk one, but with the pedantic flags disabled.
>> >> >
>> >>
>> >> This makes sense. I'll change this patch to use this layout and
>> >> to keep sections in headers.S before sending v5. (and I guess I'll
>> >> make the compressed kernel a part of .text). I have a few questions
>> >> though:
>> >>
>> >> This layout assumes having the local copy of the bootparams as
>> >> in your RFC patches, right?
>> >>
>> >
>> > Indeed. Otherwise, the setup header may not have been copied to memory
>> > by the loader.
>> >
>> >> Can I keep the .rodata -- 5th section fits in the section table
>> >> without much work?
>> >>
>> >
>> > You could, but at least the current PE/COFF loader in EDK2 will map it
>> > read/write, as it only distinguishes between executable sections and
>> > non-executable sections.
>> >
>> 
>> At least it will slightly improve security for some implementations
>> (e.g. audk, while being overly strict support RO sections)
>> 
> 
> Yeah, but more common loaders will put the compressed data in a
> writable region. I'd prefer to have a simple and common baseline where
> we always just use R-X for all text and rodata, and RW- for everything
> else.

Hmm... I'll remove the .rodata for now then. If anything changes I can
always submit it as a separate patch later anyways.

^ permalink raw reply	[flat|nested] 78+ messages in thread

end of thread, other threads:[~2023-03-13  9:12 UTC | newest]

Thread overview: 78+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-15 12:37 [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Evgeniy Baskov
2022-12-15 12:37 ` [PATCH v4 01/26] x86/boot: Align vmlinuz sections on page size Evgeniy Baskov
2023-03-10 14:43   ` Ard Biesheuvel
2023-03-11 14:30     ` Evgeniy Baskov
2023-03-11 14:42       ` Ard Biesheuvel
2022-12-15 12:37 ` [PATCH v4 02/26] x86/build: Remove RWX sections and align on 4KB Evgeniy Baskov
2023-03-10 14:45   ` Ard Biesheuvel
2023-03-11 14:31     ` Evgeniy Baskov
2022-12-15 12:37 ` [PATCH v4 03/26] x86/boot: Set cr0 to known state in trampoline Evgeniy Baskov
2023-03-10 14:48   ` Ard Biesheuvel
2022-12-15 12:37 ` [PATCH v4 04/26] x86/boot: Increase boot page table size Evgeniy Baskov
2023-03-08  9:24   ` Ard Biesheuvel
2022-12-15 12:37 ` [PATCH v4 05/26] x86/boot: Support 4KB pages for identity mapping Evgeniy Baskov
2023-03-08  9:42   ` Ard Biesheuvel
2023-03-08 16:11     ` Evgeniy Baskov
2022-12-15 12:37 ` [PATCH v4 06/26] x86/boot: Setup memory protection for bzImage code Evgeniy Baskov
2023-03-08 10:47   ` Ard Biesheuvel
2023-03-08 16:15     ` Evgeniy Baskov
2022-12-15 12:37 ` [PATCH v4 07/26] x86/build: Check W^X of vmlinux during build Evgeniy Baskov
2023-03-08  9:34   ` Ard Biesheuvel
2023-03-08 16:05     ` Evgeniy Baskov
2022-12-15 12:37 ` [PATCH v4 08/26] x86/boot: Map memory explicitly Evgeniy Baskov
2023-03-08  9:38   ` Ard Biesheuvel
2023-03-08 10:28     ` Ard Biesheuvel
2023-03-08 16:09       ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 09/26] x86/boot: Remove mapping from page fault handler Evgeniy Baskov
2023-03-10 14:49   ` Ard Biesheuvel
2022-12-15 12:38 ` [PATCH v4 10/26] efi/libstub: Move helper function to related file Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 11/26] x86/boot: Make console interface more abstract Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 12/26] x86/boot: Make kernel_add_identity_map() a pointer Evgeniy Baskov
2023-03-10 14:52   ` Ard Biesheuvel
2023-03-11 14:34     ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 13/26] x86/boot: Split trampoline and pt init code Evgeniy Baskov
2023-03-10 14:56   ` Ard Biesheuvel
2023-03-11 14:37     ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 14/26] x86/boot: Add EFI kernel extraction interface Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 15/26] efi/x86: Support extracting kernel from libstub Evgeniy Baskov
2023-03-09 16:00   ` Ard Biesheuvel
2023-03-09 17:05     ` Evgeniy Baskov
2023-03-09 16:49   ` Ard Biesheuvel
2023-03-09 17:10     ` Evgeniy Baskov
2023-03-09 17:11       ` Ard Biesheuvel
2023-03-10 15:08   ` Ard Biesheuvel
2022-12-15 12:38 ` [PATCH v4 16/26] x86/boot: Reduce lower limit of physical KASLR Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 17/26] x86/boot: Reduce size of the DOS stub Evgeniy Baskov
2023-03-10 14:59   ` Ard Biesheuvel
2023-03-11 14:49     ` Evgeniy Baskov
2023-03-11 17:27       ` Ard Biesheuvel
2023-03-12 12:10         ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 18/26] tools/include: Add simplified version of pe.h Evgeniy Baskov
2023-03-10 15:01   ` Ard Biesheuvel
2022-12-15 12:38 ` [PATCH v4 19/26] x86/build: Cleanup tools/build.c Evgeniy Baskov
2023-03-09 15:57   ` Ard Biesheuvel
2023-03-09 16:25     ` Evgeniy Baskov
2023-03-09 16:50       ` Ard Biesheuvel
2023-03-09 17:22         ` Evgeniy Baskov
2023-03-09 17:37           ` Ard Biesheuvel
2022-12-15 12:38 ` [PATCH v4 20/26] x86/build: Make generated PE more spec compliant Evgeniy Baskov
2023-03-10 15:17   ` Ard Biesheuvel
2023-03-11 15:02     ` Evgeniy Baskov
2023-03-11 17:31       ` Ard Biesheuvel
2023-03-12 12:01         ` Evgeniy Baskov
2023-03-12 13:09           ` Ard Biesheuvel
2023-03-13  9:11             ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 21/26] efi/x86: Explicitly set sections memory attributes Evgeniy Baskov
2023-03-10 15:20   ` Ard Biesheuvel
2023-03-11 15:09     ` Evgeniy Baskov
2023-03-11 17:39       ` Ard Biesheuvel
2023-03-12 12:10         ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 22/26] efi/libstub: Add memory attribute protocol definitions Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 23/26] efi/libstub: Use memory attribute protocol Evgeniy Baskov
2023-03-10 16:13   ` Ard Biesheuvel
2023-03-11 15:14     ` Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 24/26] efi/libstub: make memory protection warnings include newlines Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 25/26] efi/x86: don't try to set page attributes on 0-sized regions Evgeniy Baskov
2022-12-15 12:38 ` [PATCH v4 26/26] efi/x86: don't set unsupported memory attributes Evgeniy Baskov
2022-12-15 19:21 ` [PATCH v4 00/26] x86_64: Improvements at compressed kernel stage Peter Jones
2022-12-19 14:08   ` Evgeniy Baskov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).