* [PATCH v2 0/6] Improve kernel section protections
@ 2020-10-26 23:02 Atish Patra
  2020-10-26 23:02 ` [PATCH v2 1/6] RISC-V: Move __start_kernel to .head.text Atish Patra
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport

This series aims at improving kernel section protections by doing the following:

1. Protect kernel sections early instead of after /init.
2. Protect .init.text & .init.data sections with appropriate permissions.
3. Move the dynamic relocation section under __init.
4. Move the .init sections after .text, which is what most of the other
   architectures do as well.

After applying this series, here are the linear mapped sections
(permission flags: D=dirty, A=accessed, X=execute, W=write, R=read, V=valid).

---[ Linear mapping ]---
0xffffffe000000000-0xffffffe000800000    0x0000000080200000         8M PMD     D A . . X . R V
0xffffffe000800000-0xffffffe000c00000    0x0000000080a00000         4M PMD     D A . . . W R V
0xffffffe000c00000-0xffffffe001200000    0x0000000080e00000         6M PMD     D A . . . . R V
0xffffffe001200000-0xffffffe03fe00000    0x0000000081400000      1004M PMD     D A . . . W R V

Changes from v1->v2:
1. .init.text section is aligned with SECTION_ALIGN.
2. .init.text is moved below .text so that .head.text & .text stay
   contiguous.
3. Guo's fix for the static object issue is no longer needed.
4. Rebased on 5.10-rc1.

Atish Patra (6):
RISC-V: Move __start_kernel to .head.text
RISC-V: Initialize SBI early
RISC-V: Enforce protections for kernel sections early
RISC-V: Align the .init.text section
RISC-V: Protect .init.text & .init.data
RISC-V: Move dynamic relocation section under __init

arch/riscv/include/asm/sections.h   |  2 +
arch/riscv/include/asm/set_memory.h |  4 ++
arch/riscv/kernel/head.S            |  1 -
arch/riscv/kernel/setup.c           | 18 +++++++--
arch/riscv/kernel/vmlinux.lds.S     | 63 +++++++++++++++++------------
arch/riscv/mm/init.c                | 19 +++++++--
arch/riscv/mm/pageattr.c            |  6 +++
7 files changed, 79 insertions(+), 34 deletions(-)

--
2.25.1



* [PATCH v2 1/6] RISC-V: Move __start_kernel to .head.text
  2020-10-26 23:02 [PATCH v2 0/6] Improve kernel section protections Atish Patra
@ 2020-10-26 23:02 ` Atish Patra
  2020-10-26 23:02 ` [PATCH v2 2/6] RISC-V: Initialize SBI early Atish Patra
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport

Currently, __start_kernel is kept in the __init section while _start is in
the head section. This may result in a "relocation truncated to fit" error
if the __init section is moved far away from the head section. It also makes
sense to keep all of head.S in one section.

Keep __start_kernel in the head section rather than in __init.

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/kernel/head.S | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/riscv/kernel/head.S b/arch/riscv/kernel/head.S
index 11e2a4fe66e0..45dbdae930bf 100644
--- a/arch/riscv/kernel/head.S
+++ b/arch/riscv/kernel/head.S
@@ -177,7 +177,6 @@ setup_trap_vector:
 
 END(_start)
 
-	__INIT
 ENTRY(_start_kernel)
 	/* Mask all interrupts */
 	csrw CSR_IE, zero
-- 
2.25.1



* [PATCH v2 2/6] RISC-V: Initialize SBI early
  2020-10-26 23:02 [PATCH v2 0/6] Improve kernel section protections Atish Patra
  2020-10-26 23:02 ` [PATCH v2 1/6] RISC-V: Move __start_kernel to .head.text Atish Patra
@ 2020-10-26 23:02 ` Atish Patra
  2020-10-27 10:04   ` Mike Rapoport
  2020-10-26 23:02 ` [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early Atish Patra
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport

Currently, SBI is initialized towards the end of arch setup. This prevents
the set_memory operations from being invoked earlier, as they require a full
TLB flush, which in turn relies on SBI.

Initialize SBI as early as possible.
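
For reference, the dependency looks roughly like this (a sketch, assuming
the usual SMP/MMU configuration; take the exact helper names with a grain
of salt):

    /*
     * set_memory_ro()/set_memory_nx()/...
     *   -> __set_memory()              walks the kernel page table and
     *                                  updates the PTE permission bits
     *      -> flush_tlb_kernel_range()
     *         -> flush_tlb_all()
     *            -> sbi_remote_sfence_vma(NULL, 0, -1)
     *
     * sbi_remote_sfence_vma() dispatches through function pointers that
     * are only set up once sbi_init() has probed the SBI spec version,
     * so the set_memory_*() helpers cannot be used before sbi_init().
     */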

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/kernel/setup.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index c424cc6dd833..7d6a04ae3929 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -89,6 +89,9 @@ void __init setup_arch(char **cmdline_p)
 		pr_err("No DTB found in kernel mappings\n");
 #endif
 
+#if IS_ENABLED(CONFIG_RISCV_SBI)
+	sbi_init();
+#endif
 #ifdef CONFIG_SWIOTLB
 	swiotlb_init(1);
 #endif
@@ -97,10 +100,6 @@ void __init setup_arch(char **cmdline_p)
 	kasan_init();
 #endif
 
-#if IS_ENABLED(CONFIG_RISCV_SBI)
-	sbi_init();
-#endif
-
 #ifdef CONFIG_SMP
 	setup_smp();
 #endif
-- 
2.25.1



* [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early
  2020-10-26 23:02 [PATCH v2 0/6] Improve kernel section protections Atish Patra
  2020-10-26 23:02 ` [PATCH v2 1/6] RISC-V: Move __start_kernel to .head.text Atish Patra
  2020-10-26 23:02 ` [PATCH v2 2/6] RISC-V: Initialize SBI early Atish Patra
@ 2020-10-26 23:02 ` Atish Patra
  2020-10-27 10:00   ` Mike Rapoport
  2020-10-26 23:02 ` [PATCH v2 4/6] RISC-V: Align the .init.text section Atish Patra
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport

Currently, all memblocks are mapped with PAGE_KERNEL_EXEC and the strict
permissions are only enforced after /init starts. This leaves the kernel
vulnerable to possibly buggy built-in modules until then.

Apply permissions to individual sections as early as possible.
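
The resulting flow is roughly the following (sketch; mark_rodata_ro()
continues to be called late in boot via the generic mark_readonly() path):

    /*
     * setup_arch()                        -- early boot
     *   protect_kernel_text_data():
     *     .text                 read-only (stays executable)
     *     .rodata               non-executable (left writable for now)
     *     .data .. max_low_pfn  non-executable
     *
     * kernel_init() -> mark_readonly()    -- shortly before /init runs
     *   mark_rodata_ro():
     *     .rodata               read-only
     *   debug_checkwx()
     */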

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/include/asm/set_memory.h |  2 ++
 arch/riscv/kernel/setup.c           |  2 ++
 arch/riscv/mm/init.c                | 11 +++++++++--
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 4c5bae7ca01c..4cc3a4e2afd3 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -15,11 +15,13 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
+void protect_kernel_text_data(void);
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
+static inline void protect_kernel_text_data(void) {};
 #endif
 
 int set_direct_map_invalid_noflush(struct page *page);
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 7d6a04ae3929..b722c5bf892c 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -22,6 +22,7 @@
 #include <asm/cpu_ops.h>
 #include <asm/early_ioremap.h>
 #include <asm/setup.h>
+#include <asm/set_memory.h>
 #include <asm/sections.h>
 #include <asm/sbi.h>
 #include <asm/tlbflush.h>
@@ -92,6 +93,7 @@ void __init setup_arch(char **cmdline_p)
 #if IS_ENABLED(CONFIG_RISCV_SBI)
 	sbi_init();
 #endif
+	protect_kernel_text_data();
 #ifdef CONFIG_SWIOTLB
 	swiotlb_init(1);
 #endif
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index ea933b789a88..5f196f8158d4 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -608,7 +608,7 @@ static inline void setup_vm_final(void)
 #endif /* CONFIG_MMU */
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
-void mark_rodata_ro(void)
+void protect_kernel_text_data(void)
 {
 	unsigned long text_start = (unsigned long)_text;
 	unsigned long text_end = (unsigned long)_etext;
@@ -617,9 +617,16 @@ void mark_rodata_ro(void)
 	unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn)));
 
 	set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
-	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
 	set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
 	set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
+}
+
+void mark_rodata_ro(void)
+{
+	unsigned long rodata_start = (unsigned long)__start_rodata;
+	unsigned long data_start = (unsigned long)_data;
+
+	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
 
 	debug_checkwx();
 }
-- 
2.25.1



* [PATCH v2 4/6] RISC-V: Align the .init.text section
  2020-10-26 23:02 [PATCH v2 0/6] Improve kernel section protections Atish Patra
                   ` (2 preceding siblings ...)
  2020-10-26 23:02 ` [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early Atish Patra
@ 2020-10-26 23:02 ` Atish Patra
  2020-10-26 23:02 ` [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data Atish Patra
  2020-10-26 23:02 ` [PATCH v2 6/6] RISC-V: Move dynamic relocation section under __init Atish Patra
  5 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport, Jim Wilson

In order to improve kernel text protection, .init.text, .init.data and
.text need to be kept in separate sections. However, the RISC-V linker
relaxation code is not aware of any alignment between sections. As a result,
it may relax an R_RISCV_CALL relocation between sections to a JAL without
realizing that the inter-section alignment may move the target address
farther away. That may lead to a "relocation truncated to fit" error.
However, the linker relaxation code is aware of the individual section
alignments.

The detailed discussion of this issue can be found here:
https://github.com/riscv/riscv-gnu-toolchain/issues/738

Keep the .init.text section aligned so that linker relaxation takes that as
a hint while relaxing inter-section calls.
Here are the code size changes for each section because of this change:

section         change in size (in bytes)
  .head.text      +4
  .text           +40
  .init.text      +6530
  .exit.text      +84

The only significant increase in size happened for .init.text, because all
of its intra-section relocations also use the 2MB alignment.
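
For reference, SECTION_ALIGN comes from asm/set_memory.h and is defined
roughly as follows (so it is 2MB on 64-bit with CONFIG_STRICT_KERNEL_RWX):

    #ifdef CONFIG_STRICT_KERNEL_RWX
    #ifdef CONFIG_64BIT
    #define SECTION_ALIGN (1 << 21)
    #else
    #define SECTION_ALIGN (1 << 22)
    #endif
    #else /* !CONFIG_STRICT_KERNEL_RWX */
    #define SECTION_ALIGN L1_CACHE_BYTES
    #endif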

Suggested-by: Jim Wilson <jimw@sifive.com>
Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/kernel/vmlinux.lds.S | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
index 3ffbd6cbdb86..cacd7898ba7f 100644
--- a/arch/riscv/kernel/vmlinux.lds.S
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -30,7 +30,13 @@ SECTIONS
 	. = ALIGN(PAGE_SIZE);
 
 	__init_begin = .;
-	INIT_TEXT_SECTION(PAGE_SIZE)
+	__init_text_begin = .;
+	.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) ALIGN(SECTION_ALIGN) { \
+		_sinittext = .;						\
+		INIT_TEXT						\
+		_einittext = .;						\
+	}
+
 	. = ALIGN(8);
 	__soc_early_init_table : {
 		__soc_early_init_table_start = .;
-- 
2.25.1



* [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data
  2020-10-26 23:02 [PATCH v2 0/6] Improve kernel section protections Atish Patra
                   ` (3 preceding siblings ...)
  2020-10-26 23:02 ` [PATCH v2 4/6] RISC-V: Align the .init.text section Atish Patra
@ 2020-10-26 23:02 ` Atish Patra
  2020-10-27 10:45   ` Mike Rapoport
  2020-10-26 23:02 ` [PATCH v2 6/6] RISC-V: Move dynamic relocation section under __init Atish Patra
  5 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport

Currently, .init.text & .init.data are intermixed, which makes it impossible
to apply different permissions to them. .init.data shouldn't need exec
permissions, while .init.text shouldn't have write permission.

Keep them in separate sections so that different permissions can be applied
to each section. This improves kernel protection under
CONFIG_STRICT_KERNEL_RWX. We also need to restore the default permissions
for the entire __init region when it is freed, so that those pages can be
reused for other purposes.
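
The intended end state under CONFIG_STRICT_KERNEL_RWX is roughly:

    /*
     * .text (incl. .head.text)   read-only, executable
     * .init.text                 read-only, executable
     * .init.data                 writable, non-executable
     * .rodata                    non-executable early,
     *                            read-only after mark_rodata_ro()
     * .data/.bss                 writable, non-executable
     *
     * free_initmem() switches the whole __init_begin..__init_end range
     * back to the default (writable, non-executable) attributes before
     * the pages are poisoned and handed back to the page allocator.
     */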

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/include/asm/sections.h   |  2 ++
 arch/riscv/include/asm/set_memory.h |  2 ++
 arch/riscv/kernel/setup.c           |  9 +++++
 arch/riscv/kernel/vmlinux.lds.S     | 51 ++++++++++++++++-------------
 arch/riscv/mm/init.c                |  8 ++++-
 arch/riscv/mm/pageattr.c            |  6 ++++
 6 files changed, 54 insertions(+), 24 deletions(-)

diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
index 3a9971b1210f..1595c5b60cfd 100644
--- a/arch/riscv/include/asm/sections.h
+++ b/arch/riscv/include/asm/sections.h
@@ -9,5 +9,7 @@
 
 extern char _start[];
 extern char _start_kernel[];
+extern char __init_data_begin[], __init_data_end[];
+extern char __init_text_begin[], __init_text_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 4cc3a4e2afd3..913429c9c1ae 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -15,6 +15,7 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 int set_memory_x(unsigned long addr, int numpages);
 int set_memory_nx(unsigned long addr, int numpages);
+int set_memory_default(unsigned long addr, int numpages);
 void protect_kernel_text_data(void);
 #else
 static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
@@ -22,6 +23,7 @@ static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 static inline void protect_kernel_text_data(void) {};
+static inline int set_memory_default(unsigned long addr, int numpages) { return 0; }
 #endif
 
 int set_direct_map_invalid_noflush(struct page *page);
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index b722c5bf892c..abfbdc8cfef3 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -123,3 +123,12 @@ static int __init topology_init(void)
 	return 0;
 }
 subsys_initcall(topology_init);
+
+void free_initmem(void)
+{
+	unsigned long init_begin = (unsigned long)__init_begin;
+	unsigned long init_end = (unsigned long)__init_end;
+
+	set_memory_default(init_begin, (init_end - init_begin) >> PAGE_SHIFT);
+	free_initmem_default(POISON_FREE_INITMEM);
+}
diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
index cacd7898ba7f..0a1874e48e8a 100644
--- a/arch/riscv/kernel/vmlinux.lds.S
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -29,6 +29,26 @@ SECTIONS
 	HEAD_TEXT_SECTION
 	. = ALIGN(PAGE_SIZE);
 
+	.text : {
+		_text = .;
+		_stext = .;
+		TEXT_TEXT
+		SCHED_TEXT
+		CPUIDLE_TEXT
+		LOCK_TEXT
+		KPROBES_TEXT
+		ENTRY_TEXT
+		IRQENTRY_TEXT
+		SOFTIRQENTRY_TEXT
+		*(.fixup)
+		_etext = .;
+	}
+
+#ifdef CONFIG_EFI
+	. = ALIGN(PECOFF_SECTION_ALIGNMENT);
+	__pecoff_text_end = .;
+#endif
+	. = ALIGN(SECTION_ALIGN);
 	__init_begin = .;
 	__init_text_begin = .;
 	.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) ALIGN(SECTION_ALIGN) { \
@@ -53,35 +73,20 @@ SECTIONS
 	{
 		EXIT_TEXT
 	}
+
+	__init_text_end = .;
+	. = ALIGN(SECTION_ALIGN);
+	/* Start of init data section */
+	__init_data_begin = .;
+	INIT_DATA_SECTION(16)
 	.exit.data :
 	{
 		EXIT_DATA
 	}
 	PERCPU_SECTION(L1_CACHE_BYTES)
-	__init_end = .;
 
-	. = ALIGN(SECTION_ALIGN);
-	.text : {
-		_text = .;
-		_stext = .;
-		TEXT_TEXT
-		SCHED_TEXT
-		CPUIDLE_TEXT
-		LOCK_TEXT
-		KPROBES_TEXT
-		ENTRY_TEXT
-		IRQENTRY_TEXT
-		SOFTIRQENTRY_TEXT
-		*(.fixup)
-		_etext = .;
-	}
-
-#ifdef CONFIG_EFI
-	. = ALIGN(PECOFF_SECTION_ALIGNMENT);
-	__pecoff_text_end = .;
-#endif
-
-	INIT_DATA_SECTION(16)
+	__init_data_end = .;
+	__init_end = .;
 
 	/* Start of data section */
 	_sdata = .;
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 5f196f8158d4..1bb3821d81d5 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -610,13 +610,19 @@ static inline void setup_vm_final(void)
 #ifdef CONFIG_STRICT_KERNEL_RWX
 void protect_kernel_text_data(void)
 {
-	unsigned long text_start = (unsigned long)_text;
+	unsigned long text_start = (unsigned long)_start;
 	unsigned long text_end = (unsigned long)_etext;
+	unsigned long init_text_start = (unsigned long)__init_text_begin;
+	unsigned long init_text_end = (unsigned long)__init_text_end;
+	unsigned long init_data_start = (unsigned long)__init_data_begin;
+	unsigned long init_data_end = (unsigned long)__init_data_end;
 	unsigned long rodata_start = (unsigned long)__start_rodata;
 	unsigned long data_start = (unsigned long)_data;
 	unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn)));
 
+	set_memory_ro(init_text_start, (init_text_end - init_text_start) >> PAGE_SHIFT);
 	set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
+	set_memory_nx(init_data_start, (init_data_end - init_data_start) >> PAGE_SHIFT);
 	set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
 	set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
 }
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 19fecb362d81..04f3fc16aa9c 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -128,6 +128,12 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
 	return ret;
 }
 
+int set_memory_default(unsigned long addr, int numpages)
+{
+	return __set_memory(addr, numpages, __pgprot(_PAGE_KERNEL),
+			    __pgprot(_PAGE_EXEC));
+}
+
 int set_memory_ro(unsigned long addr, int numpages)
 {
 	return __set_memory(addr, numpages, __pgprot(_PAGE_READ),
-- 
2.25.1



* [PATCH v2 6/6] RISC-V: Move dynamic relocation section under __init
  2020-10-26 23:02 [PATCH v2 0/6] Improve kernel section protections Atish Patra
                   ` (4 preceding siblings ...)
  2020-10-26 23:02 ` [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data Atish Patra
@ 2020-10-26 23:02 ` Atish Patra
  5 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-26 23:02 UTC (permalink / raw)
  To: linux-kernel
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-riscv, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel, Mike Rapoport

The dynamic relocation section (.rel.dyn) is only required during boot, so
it can be freed after init. Move it under the __init region so that it is
released together with the rest of the init memory.
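
With the previous patch in place, free_initmem() releases everything between
__init_begin and __init_end, so placing the section inside that window is
enough for it to be reclaimed once the relocations have been consumed during
early boot (rough sketch of the resulting layout):

    /*
     *   __init_begin
     *     .init.text
     *     .init.data  (INIT_DATA_SECTION, .exit.data, percpu, ...)
     *     .rel.dyn    <-- moved here by this patch
     *   __init_end    -- everything above is freed by free_initmem()
     */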

Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/kernel/vmlinux.lds.S | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/kernel/vmlinux.lds.S b/arch/riscv/kernel/vmlinux.lds.S
index 0a1874e48e8a..64c5e74008b7 100644
--- a/arch/riscv/kernel/vmlinux.lds.S
+++ b/arch/riscv/kernel/vmlinux.lds.S
@@ -85,6 +85,10 @@ SECTIONS
 	}
 	PERCPU_SECTION(L1_CACHE_BYTES)
 
+	.rel.dyn : {
+		*(.rel.dyn*)
+	}
+
 	__init_data_end = .;
 	__init_end = .;
 
@@ -116,10 +120,6 @@ SECTIONS
 
 	BSS_SECTION(PAGE_SIZE, PAGE_SIZE, 0)
 
-	.rel.dyn : {
-		*(.rel.dyn*)
-	}
-
 #ifdef CONFIG_EFI
 	. = ALIGN(PECOFF_SECTION_ALIGNMENT);
 	__pecoff_data_virt_size = ABSOLUTE(. - __pecoff_text_end);
-- 
2.25.1



* Re: [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early
  2020-10-26 23:02 ` [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early Atish Patra
@ 2020-10-27 10:00   ` Mike Rapoport
  2020-10-27 18:38     ` Atish Patra
  0 siblings, 1 reply; 15+ messages in thread
From: Mike Rapoport @ 2020-10-27 10:00 UTC (permalink / raw)
  To: Atish Patra
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-kernel, linux-riscv,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel

On Mon, Oct 26, 2020 at 04:02:51PM -0700, Atish Patra wrote:
> Currently, all memblocks are mapped with PAGE_KERNEL_EXEC and the strict
> permissions are only enforced after /init starts. This leaves the kernel
> vulnerable from possible buggy built-in modules.
> 
> Apply permissions to individual sections as early as possible.
> 
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> ---
>  arch/riscv/include/asm/set_memory.h |  2 ++
>  arch/riscv/kernel/setup.c           |  2 ++
>  arch/riscv/mm/init.c                | 11 +++++++++--
>  3 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 4c5bae7ca01c..4cc3a4e2afd3 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -15,11 +15,13 @@ int set_memory_ro(unsigned long addr, int numpages);
>  int set_memory_rw(unsigned long addr, int numpages);
>  int set_memory_x(unsigned long addr, int numpages);
>  int set_memory_nx(unsigned long addr, int numpages);
> +void protect_kernel_text_data(void);
>  #else
>  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> +static inline void protect_kernel_text_data(void) {};
>  #endif
>  
>  int set_direct_map_invalid_noflush(struct page *page);
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index 7d6a04ae3929..b722c5bf892c 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -22,6 +22,7 @@
>  #include <asm/cpu_ops.h>
>  #include <asm/early_ioremap.h>
>  #include <asm/setup.h>
> +#include <asm/set_memory.h>
>  #include <asm/sections.h>
>  #include <asm/sbi.h>
>  #include <asm/tlbflush.h>
> @@ -92,6 +93,7 @@ void __init setup_arch(char **cmdline_p)
>  #if IS_ENABLED(CONFIG_RISCV_SBI)
>  	sbi_init();
>  #endif
> +	protect_kernel_text_data();
>  #ifdef CONFIG_SWIOTLB
>  	swiotlb_init(1);
>  #endif
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index ea933b789a88..5f196f8158d4 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c
> @@ -608,7 +608,7 @@ static inline void setup_vm_final(void)
>  #endif /* CONFIG_MMU */
>  
>  #ifdef CONFIG_STRICT_KERNEL_RWX
> -void mark_rodata_ro(void)
> +void protect_kernel_text_data(void)
>  {
>  	unsigned long text_start = (unsigned long)_text;
>  	unsigned long text_end = (unsigned long)_etext;
> @@ -617,9 +617,16 @@ void mark_rodata_ro(void)
>  	unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn)));
>  

A comment noting that the rodata permissions are set later would be nice
here.

>  	set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
> -	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
>  	set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
>  	set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
> +}
> +
> +void mark_rodata_ro(void)
> +{
> +	unsigned long rodata_start = (unsigned long)__start_rodata;
> +	unsigned long data_start = (unsigned long)_data;
> +
> +	set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
>  
>  	debug_checkwx();
>  }
> -- 
> 2.25.1
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH v2 2/6] RISC-V: Initialize SBI early
  2020-10-26 23:02 ` [PATCH v2 2/6] RISC-V: Initialize SBI early Atish Patra
@ 2020-10-27 10:04   ` Mike Rapoport
  2020-10-27 18:38     ` Atish Patra
  0 siblings, 1 reply; 15+ messages in thread
From: Mike Rapoport @ 2020-10-27 10:04 UTC (permalink / raw)
  To: Atish Patra
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-kernel, linux-riscv,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel

On Mon, Oct 26, 2020 at 04:02:50PM -0700, Atish Patra wrote:
> Currently, SBI is initialized towards the end of arch setup. This prevents
> the set memory operations to be invoked earlier as it requires a full tlb
> flush.
> 
> Initialize SBI as early as possible.
> 
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> ---
>  arch/riscv/kernel/setup.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index c424cc6dd833..7d6a04ae3929 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -89,6 +89,9 @@ void __init setup_arch(char **cmdline_p)
>  		pr_err("No DTB found in kernel mappings\n");
>  #endif
>  
> +#if IS_ENABLED(CONFIG_RISCV_SBI)

Maybe
	if (IS_ENABLED(CONFIG_RISCV_SBI))
		sbi_init()

> +	sbi_init();
> +#endif
>  #ifdef CONFIG_SWIOTLB
>  	swiotlb_init(1);
>  #endif
> @@ -97,10 +100,6 @@ void __init setup_arch(char **cmdline_p)
>  	kasan_init();
>  #endif
>  
> -#if IS_ENABLED(CONFIG_RISCV_SBI)
> -	sbi_init();
> -#endif
> -
>  #ifdef CONFIG_SMP
>  	setup_smp();
>  #endif
> -- 
> 2.25.1
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data
  2020-10-26 23:02 ` [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data Atish Patra
@ 2020-10-27 10:45   ` Mike Rapoport
  2020-10-29 19:21     ` Atish Patra
  0 siblings, 1 reply; 15+ messages in thread
From: Mike Rapoport @ 2020-10-27 10:45 UTC (permalink / raw)
  To: Atish Patra
  Cc: Albert Ou, Kees Cook, Anup Patel, linux-kernel, linux-riscv,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	Andrew Morton, Borislav Petkov, Michel Lespinasse,
	Ard Biesheuvel

On Mon, Oct 26, 2020 at 04:02:53PM -0700, Atish Patra wrote:
> Currently, .init.text & .init.data are intermixed which makes it impossible
> apply different permissions to them. .init.data shouldn't need exec
> permissions while .init.text shouldn't have write permission.
> 
> Keep them in separate sections so that different permissions are applied to
> each section. This improves the kernel protection under
> CONFIG_STRICT_KERNEL_RWX. We also need to restore the permissions for the
> entire _init section after it is freed so that those pages can be used for
> other purpose.
> 
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> ---
>  arch/riscv/include/asm/sections.h   |  2 ++
>  arch/riscv/include/asm/set_memory.h |  2 ++
>  arch/riscv/kernel/setup.c           |  9 +++++
>  arch/riscv/kernel/vmlinux.lds.S     | 51 ++++++++++++++++-------------
>  arch/riscv/mm/init.c                |  8 ++++-
>  arch/riscv/mm/pageattr.c            |  6 ++++
>  6 files changed, 54 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
> index 3a9971b1210f..1595c5b60cfd 100644
> --- a/arch/riscv/include/asm/sections.h
> +++ b/arch/riscv/include/asm/sections.h
> @@ -9,5 +9,7 @@
>  
>  extern char _start[];
>  extern char _start_kernel[];
> +extern char __init_data_begin[], __init_data_end[];
> +extern char __init_text_begin[], __init_text_end[];
>  
>  #endif /* __ASM_SECTIONS_H */
> diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> index 4cc3a4e2afd3..913429c9c1ae 100644
> --- a/arch/riscv/include/asm/set_memory.h
> +++ b/arch/riscv/include/asm/set_memory.h
> @@ -15,6 +15,7 @@ int set_memory_ro(unsigned long addr, int numpages);
>  int set_memory_rw(unsigned long addr, int numpages);
>  int set_memory_x(unsigned long addr, int numpages);
>  int set_memory_nx(unsigned long addr, int numpages);
> +int set_memory_default(unsigned long addr, int numpages);
>  void protect_kernel_text_data(void);
>  #else
>  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
> @@ -22,6 +23,7 @@ static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
>  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
>  static inline void protect_kernel_text_data(void) {};
> +static inline int set_memory_default(unsigned long addr, int numpages) { return 0; }
>  #endif
>  
>  int set_direct_map_invalid_noflush(struct page *page);
> diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> index b722c5bf892c..abfbdc8cfef3 100644
> --- a/arch/riscv/kernel/setup.c
> +++ b/arch/riscv/kernel/setup.c
> @@ -123,3 +123,12 @@ static int __init topology_init(void)
>  	return 0;
>  }
>  subsys_initcall(topology_init);
> +
> +void free_initmem(void)
> +{
> +	unsigned long init_begin = (unsigned long)__init_begin;
> +	unsigned long init_end = (unsigned long)__init_end;
> +
> +	set_memory_default(init_begin, (init_end - init_begin) >> PAGE_SHIFT);

And what does "default" imply?
Maybe set_memory_rw() would be a better name ...

> +	free_initmem_default(POISON_FREE_INITMEM);
> +}

...

> diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> index 19fecb362d81..04f3fc16aa9c 100644
> --- a/arch/riscv/mm/pageattr.c
> +++ b/arch/riscv/mm/pageattr.c
> @@ -128,6 +128,12 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
>  	return ret;
>  }
>  
> +int set_memory_default(unsigned long addr, int numpages)
> +{
> +	return __set_memory(addr, numpages, __pgprot(_PAGE_KERNEL),
> +			    __pgprot(_PAGE_EXEC));

... because you'd need to find what _PAGE_KERNEL is, do bitwise ops and
then find out that default is apparently RW :)

> +}
> +
>  int set_memory_ro(unsigned long addr, int numpages)
>  {
>  	return __set_memory(addr, numpages, __pgprot(_PAGE_READ),
> -- 
> 2.25.1
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH v2 3/6] RISC-V: Enforce protections for kernel sections early
  2020-10-27 10:00   ` Mike Rapoport
@ 2020-10-27 18:38     ` Atish Patra
  0 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-27 18:38 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Albert Ou, Kees Cook, Anup Patel,
	linux-kernel@vger.kernel.org List, Ard Biesheuvel, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	linux-riscv, Borislav Petkov, Michel Lespinasse, Andrew Morton

On Tue, Oct 27, 2020 at 3:01 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, Oct 26, 2020 at 04:02:51PM -0700, Atish Patra wrote:
> > Currently, all memblocks are mapped with PAGE_KERNEL_EXEC and the strict
> > permissions are only enforced after /init starts. This leaves the kernel
> > vulnerable from possible buggy built-in modules.
> >
> > Apply permissions to individual sections as early as possible.
> >
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > ---
> >  arch/riscv/include/asm/set_memory.h |  2 ++
> >  arch/riscv/kernel/setup.c           |  2 ++
> >  arch/riscv/mm/init.c                | 11 +++++++++--
> >  3 files changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> > index 4c5bae7ca01c..4cc3a4e2afd3 100644
> > --- a/arch/riscv/include/asm/set_memory.h
> > +++ b/arch/riscv/include/asm/set_memory.h
> > @@ -15,11 +15,13 @@ int set_memory_ro(unsigned long addr, int numpages);
> >  int set_memory_rw(unsigned long addr, int numpages);
> >  int set_memory_x(unsigned long addr, int numpages);
> >  int set_memory_nx(unsigned long addr, int numpages);
> > +void protect_kernel_text_data(void);
> >  #else
> >  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
> >  static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
> >  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
> >  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> > +static inline void protect_kernel_text_data(void) {};
> >  #endif
> >
> >  int set_direct_map_invalid_noflush(struct page *page);
> > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > index 7d6a04ae3929..b722c5bf892c 100644
> > --- a/arch/riscv/kernel/setup.c
> > +++ b/arch/riscv/kernel/setup.c
> > @@ -22,6 +22,7 @@
> >  #include <asm/cpu_ops.h>
> >  #include <asm/early_ioremap.h>
> >  #include <asm/setup.h>
> > +#include <asm/set_memory.h>
> >  #include <asm/sections.h>
> >  #include <asm/sbi.h>
> >  #include <asm/tlbflush.h>
> > @@ -92,6 +93,7 @@ void __init setup_arch(char **cmdline_p)
> >  #if IS_ENABLED(CONFIG_RISCV_SBI)
> >       sbi_init();
> >  #endif
> > +     protect_kernel_text_data();
> >  #ifdef CONFIG_SWIOTLB
> >       swiotlb_init(1);
> >  #endif
> > diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> > index ea933b789a88..5f196f8158d4 100644
> > --- a/arch/riscv/mm/init.c
> > +++ b/arch/riscv/mm/init.c
> > @@ -608,7 +608,7 @@ static inline void setup_vm_final(void)
> >  #endif /* CONFIG_MMU */
> >
> >  #ifdef CONFIG_STRICT_KERNEL_RWX
> > -void mark_rodata_ro(void)
> > +void protect_kernel_text_data(void)
> >  {
> >       unsigned long text_start = (unsigned long)_text;
> >       unsigned long text_end = (unsigned long)_etext;
> > @@ -617,9 +617,16 @@ void mark_rodata_ro(void)
> >       unsigned long max_low = (unsigned long)(__va(PFN_PHYS(max_low_pfn)));
> >
>
> A comment about that rodata permissions are set later would be nice
> here.
>

Sure. I will add that.
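
Something along these lines, I guess (assuming the reason we defer .rodata
is that it is still written during boot, e.g. the __ro_after_init data that
lives there):

        /*
         * .rodata is left writable here on purpose: it is still written
         * during boot (e.g. __ro_after_init data), so it is only marked
         * read-only later, from mark_rodata_ro().
         */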

> >       set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
> > -     set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> >       set_memory_nx(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> >       set_memory_nx(data_start, (max_low - data_start) >> PAGE_SHIFT);
> > +}
> > +
> > +void mark_rodata_ro(void)
> > +{
> > +     unsigned long rodata_start = (unsigned long)__start_rodata;
> > +     unsigned long data_start = (unsigned long)_data;
> > +
> > +     set_memory_ro(rodata_start, (data_start - rodata_start) >> PAGE_SHIFT);
> >
> >       debug_checkwx();
> >  }
> > --
> > 2.25.1
> >
>
> --
> Sincerely yours,
> Mike.
>



-- 
Regards,
Atish


* Re: [PATCH v2 2/6] RISC-V: Initialize SBI early
  2020-10-27 10:04   ` Mike Rapoport
@ 2020-10-27 18:38     ` Atish Patra
  0 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-27 18:38 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Albert Ou, Kees Cook, Anup Patel,
	linux-kernel@vger.kernel.org List, Ard Biesheuvel, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	linux-riscv, Borislav Petkov, Michel Lespinasse, Andrew Morton

On Tue, Oct 27, 2020 at 3:04 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, Oct 26, 2020 at 04:02:50PM -0700, Atish Patra wrote:
> > Currently, SBI is initialized towards the end of arch setup. This prevents
> > the set memory operations to be invoked earlier as it requires a full tlb
> > flush.
> >
> > Initialize SBI as early as possible.
> >
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > ---
> >  arch/riscv/kernel/setup.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > index c424cc6dd833..7d6a04ae3929 100644
> > --- a/arch/riscv/kernel/setup.c
> > +++ b/arch/riscv/kernel/setup.c
> > @@ -89,6 +89,9 @@ void __init setup_arch(char **cmdline_p)
> >               pr_err("No DTB found in kernel mappings\n");
> >  #endif
> >
> > +#if IS_ENABLED(CONFIG_RISCV_SBI)
>
> Maybe
>         if (IS_ENABLED(CONFIG_RISCV_SBI))
>                 sbi_init()
>

ok. Will update.
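
i.e. something like:

        if (IS_ENABLED(CONFIG_RISCV_SBI))
                sbi_init();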

> > +     sbi_init();
> > +#endif
> >  #ifdef CONFIG_SWIOTLB
> >       swiotlb_init(1);
> >  #endif
> > @@ -97,10 +100,6 @@ void __init setup_arch(char **cmdline_p)
> >       kasan_init();
> >  #endif
> >
> > -#if IS_ENABLED(CONFIG_RISCV_SBI)
> > -     sbi_init();
> > -#endif
> > -
> >  #ifdef CONFIG_SMP
> >       setup_smp();
> >  #endif
> > --
> > 2.25.1
> >
>
> --
> Sincerely yours,
> Mike.
>



-- 
Regards,
Atish


* Re: [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data
  2020-10-27 10:45   ` Mike Rapoport
@ 2020-10-29 19:21     ` Atish Patra
  2020-10-30  8:49       ` Mike Rapoport
  0 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2020-10-29 19:21 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Albert Ou, Kees Cook, Anup Patel,
	linux-kernel@vger.kernel.org List, Ard Biesheuvel, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	linux-riscv, Borislav Petkov, Michel Lespinasse, Andrew Morton

On Tue, Oct 27, 2020 at 3:46 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Mon, Oct 26, 2020 at 04:02:53PM -0700, Atish Patra wrote:
> > Currently, .init.text & .init.data are intermixed which makes it impossible
> > apply different permissions to them. .init.data shouldn't need exec
> > permissions while .init.text shouldn't have write permission.
> >
> > Keep them in separate sections so that different permissions are applied to
> > each section. This improves the kernel protection under
> > CONFIG_STRICT_KERNEL_RWX. We also need to restore the permissions for the
> > entire _init section after it is freed so that those pages can be used for
> > other purpose.
> >
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > ---
> >  arch/riscv/include/asm/sections.h   |  2 ++
> >  arch/riscv/include/asm/set_memory.h |  2 ++
> >  arch/riscv/kernel/setup.c           |  9 +++++
> >  arch/riscv/kernel/vmlinux.lds.S     | 51 ++++++++++++++++-------------
> >  arch/riscv/mm/init.c                |  8 ++++-
> >  arch/riscv/mm/pageattr.c            |  6 ++++
> >  6 files changed, 54 insertions(+), 24 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
> > index 3a9971b1210f..1595c5b60cfd 100644
> > --- a/arch/riscv/include/asm/sections.h
> > +++ b/arch/riscv/include/asm/sections.h
> > @@ -9,5 +9,7 @@
> >
> >  extern char _start[];
> >  extern char _start_kernel[];
> > +extern char __init_data_begin[], __init_data_end[];
> > +extern char __init_text_begin[], __init_text_end[];
> >
> >  #endif /* __ASM_SECTIONS_H */
> > diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> > index 4cc3a4e2afd3..913429c9c1ae 100644
> > --- a/arch/riscv/include/asm/set_memory.h
> > +++ b/arch/riscv/include/asm/set_memory.h
> > @@ -15,6 +15,7 @@ int set_memory_ro(unsigned long addr, int numpages);
> >  int set_memory_rw(unsigned long addr, int numpages);
> >  int set_memory_x(unsigned long addr, int numpages);
> >  int set_memory_nx(unsigned long addr, int numpages);
> > +int set_memory_default(unsigned long addr, int numpages);
> >  void protect_kernel_text_data(void);
> >  #else
> >  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
> > @@ -22,6 +23,7 @@ static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
> >  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
> >  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> >  static inline void protect_kernel_text_data(void) {};
> > +static inline int set_memory_default(unsigned long addr, int numpages) { return 0; }
> >  #endif
> >
> >  int set_direct_map_invalid_noflush(struct page *page);
> > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > index b722c5bf892c..abfbdc8cfef3 100644
> > --- a/arch/riscv/kernel/setup.c
> > +++ b/arch/riscv/kernel/setup.c
> > @@ -123,3 +123,12 @@ static int __init topology_init(void)
> >       return 0;
> >  }
> >  subsys_initcall(topology_init);
> > +
> > +void free_initmem(void)
> > +{
> > +     unsigned long init_begin = (unsigned long)__init_begin;
> > +     unsigned long init_end = (unsigned long)__init_end;
> > +
> > +     set_memory_default(init_begin, (init_end - init_begin) >> PAGE_SHIFT);
>
> And what does "default" imply?
> Maybe set_memory_rw() would better name ...
>
> > +     free_initmem_default(POISON_FREE_INITMEM);
> > +}
>
> ...
>
> > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > index 19fecb362d81..04f3fc16aa9c 100644
> > --- a/arch/riscv/mm/pageattr.c
> > +++ b/arch/riscv/mm/pageattr.c
> > @@ -128,6 +128,12 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
> >       return ret;
> >  }
> >
> > +int set_memory_default(unsigned long addr, int numpages)
> > +{
> > +     return __set_memory(addr, numpages, __pgprot(_PAGE_KERNEL),
> > +                         __pgprot(_PAGE_EXEC));
>
> ... because you'd need to find what _PAGE_KERNEL is, do bitwise ops and
> than find out that default is apparently RW :)
>

Yeah, but we have to explicitly disable the EXECUTE bit, as these pages
were marked RWX earlier. set_memory_rw makes sure that the R and W bits are
set, but it doesn't clear the X bit.
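
For reference, set_memory_rw() is currently just (roughly):

        int set_memory_rw(unsigned long addr, int numpages)
        {
                /* set mask: R + W, clear mask: empty -- X is untouched */
                return __set_memory(addr, numpages,
                                    __pgprot(_PAGE_READ | _PAGE_WRITE),
                                    __pgprot(0));
        }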

> > +}
> > +
> >  int set_memory_ro(unsigned long addr, int numpages)
> >  {
> >       return __set_memory(addr, numpages, __pgprot(_PAGE_READ),
> > --
> > 2.25.1
> >
>
> --
> Sincerely yours,
> Mike.
>



-- 
Regards,
Atish


* Re: [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data
  2020-10-29 19:21     ` Atish Patra
@ 2020-10-30  8:49       ` Mike Rapoport
  2020-10-30 20:29         ` Atish Patra
  0 siblings, 1 reply; 15+ messages in thread
From: Mike Rapoport @ 2020-10-30  8:49 UTC (permalink / raw)
  To: Atish Patra
  Cc: Albert Ou, Kees Cook, Anup Patel,
	linux-kernel@vger.kernel.org List, Ard Biesheuvel, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	linux-riscv, Borislav Petkov, Michel Lespinasse, Andrew Morton

On Thu, Oct 29, 2020 at 12:21:41PM -0700, Atish Patra wrote:
> On Tue, Oct 27, 2020 at 3:46 AM Mike Rapoport <rppt@kernel.org> wrote:
> >
> > On Mon, Oct 26, 2020 at 04:02:53PM -0700, Atish Patra wrote:
> > > Currently, .init.text & .init.data are intermixed which makes it impossible
> > > apply different permissions to them. .init.data shouldn't need exec
> > > permissions while .init.text shouldn't have write permission.
> > >
> > > Keep them in separate sections so that different permissions are applied to
> > > each section. This improves the kernel protection under
> > > CONFIG_STRICT_KERNEL_RWX. We also need to restore the permissions for the
> > > entire _init section after it is freed so that those pages can be used for
> > > other purpose.
> > >
> > > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > > ---
> > >  arch/riscv/include/asm/sections.h   |  2 ++
> > >  arch/riscv/include/asm/set_memory.h |  2 ++
> > >  arch/riscv/kernel/setup.c           |  9 +++++
> > >  arch/riscv/kernel/vmlinux.lds.S     | 51 ++++++++++++++++-------------
> > >  arch/riscv/mm/init.c                |  8 ++++-
> > >  arch/riscv/mm/pageattr.c            |  6 ++++
> > >  6 files changed, 54 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
> > > index 3a9971b1210f..1595c5b60cfd 100644
> > > --- a/arch/riscv/include/asm/sections.h
> > > +++ b/arch/riscv/include/asm/sections.h
> > > @@ -9,5 +9,7 @@
> > >
> > >  extern char _start[];
> > >  extern char _start_kernel[];
> > > +extern char __init_data_begin[], __init_data_end[];
> > > +extern char __init_text_begin[], __init_text_end[];
> > >
> > >  #endif /* __ASM_SECTIONS_H */
> > > diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> > > index 4cc3a4e2afd3..913429c9c1ae 100644
> > > --- a/arch/riscv/include/asm/set_memory.h
> > > +++ b/arch/riscv/include/asm/set_memory.h
> > > @@ -15,6 +15,7 @@ int set_memory_ro(unsigned long addr, int numpages);
> > >  int set_memory_rw(unsigned long addr, int numpages);
> > >  int set_memory_x(unsigned long addr, int numpages);
> > >  int set_memory_nx(unsigned long addr, int numpages);
> > > +int set_memory_default(unsigned long addr, int numpages);
> > >  void protect_kernel_text_data(void);
> > >  #else
> > >  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
> > > @@ -22,6 +23,7 @@ static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
> > >  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
> > >  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> > >  static inline void protect_kernel_text_data(void) {};
> > > +static inline int set_memory_default(unsigned long addr, int numpages) { return 0; }
> > >  #endif
> > >
> > >  int set_direct_map_invalid_noflush(struct page *page);
> > > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > > index b722c5bf892c..abfbdc8cfef3 100644
> > > --- a/arch/riscv/kernel/setup.c
> > > +++ b/arch/riscv/kernel/setup.c
> > > @@ -123,3 +123,12 @@ static int __init topology_init(void)
> > >       return 0;
> > >  }
> > >  subsys_initcall(topology_init);
> > > +
> > > +void free_initmem(void)
> > > +{
> > > +     unsigned long init_begin = (unsigned long)__init_begin;
> > > +     unsigned long init_end = (unsigned long)__init_end;
> > > +
> > > +     set_memory_default(init_begin, (init_end - init_begin) >> PAGE_SHIFT);
> >
> > And what does "default" imply?
> > Maybe set_memory_rw() would better name ...
> >
> > > +     free_initmem_default(POISON_FREE_INITMEM);
> > > +}
> >
> > ...
> >
> > > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > > index 19fecb362d81..04f3fc16aa9c 100644
> > > --- a/arch/riscv/mm/pageattr.c
> > > +++ b/arch/riscv/mm/pageattr.c
> > > @@ -128,6 +128,12 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
> > >       return ret;
> > >  }
> > >
> > > +int set_memory_default(unsigned long addr, int numpages)
> > > +{
> > > +     return __set_memory(addr, numpages, __pgprot(_PAGE_KERNEL),
> > > +                         __pgprot(_PAGE_EXEC));
> >
> > ... because you'd need to find what _PAGE_KERNEL is, do bitwise ops and
> > than find out that default is apparently RW :)
> >
> 
> Yeah. But We have explicitly disable the EXECUTE bit as these pages were marked
> with RWX earlier. set_memory_rw makes sure that RW bits are set but
> doesn't disable
> the X bit.

Maybe set_memory_rw_nx() then?
Then there will be no ambiguity about what this function does.

Besides, having set_memory_default() and set_direct_map_default() with
different masks would be confusing :)

> > > +}
> > > +
> > >  int set_memory_ro(unsigned long addr, int numpages)
> > >  {
> > >       return __set_memory(addr, numpages, __pgprot(_PAGE_READ),
> > > --
> > > 2.25.1
> > >
> >
> > --
> > Sincerely yours,
> > Mike.
> >
> 
> 
> 
> -- 
> Regards,
> Atish

-- 
Sincerely yours,
Mike.


* Re: [PATCH v2 5/6] RISC-V: Protect .init.text & .init.data
  2020-10-30  8:49       ` Mike Rapoport
@ 2020-10-30 20:29         ` Atish Patra
  0 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2020-10-30 20:29 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Albert Ou, Kees Cook, Anup Patel,
	linux-kernel@vger.kernel.org List, Ard Biesheuvel, Atish Patra,
	Palmer Dabbelt, Zong Li, Paul Walmsley, Greentime Hu,
	linux-riscv, Borislav Petkov, Michel Lespinasse, Andrew Morton

On Fri, Oct 30, 2020 at 1:49 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> On Thu, Oct 29, 2020 at 12:21:41PM -0700, Atish Patra wrote:
> > On Tue, Oct 27, 2020 at 3:46 AM Mike Rapoport <rppt@kernel.org> wrote:
> > >
> > > On Mon, Oct 26, 2020 at 04:02:53PM -0700, Atish Patra wrote:
> > > > Currently, .init.text & .init.data are intermixed which makes it impossible
> > > > apply different permissions to them. .init.data shouldn't need exec
> > > > permissions while .init.text shouldn't have write permission.
> > > >
> > > > Keep them in separate sections so that different permissions are applied to
> > > > each section. This improves the kernel protection under
> > > > CONFIG_STRICT_KERNEL_RWX. We also need to restore the permissions for the
> > > > entire _init section after it is freed so that those pages can be used for
> > > > other purpose.
> > > >
> > > > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> > > > ---
> > > >  arch/riscv/include/asm/sections.h   |  2 ++
> > > >  arch/riscv/include/asm/set_memory.h |  2 ++
> > > >  arch/riscv/kernel/setup.c           |  9 +++++
> > > >  arch/riscv/kernel/vmlinux.lds.S     | 51 ++++++++++++++++-------------
> > > >  arch/riscv/mm/init.c                |  8 ++++-
> > > >  arch/riscv/mm/pageattr.c            |  6 ++++
> > > >  6 files changed, 54 insertions(+), 24 deletions(-)
> > > >
> > > > diff --git a/arch/riscv/include/asm/sections.h b/arch/riscv/include/asm/sections.h
> > > > index 3a9971b1210f..1595c5b60cfd 100644
> > > > --- a/arch/riscv/include/asm/sections.h
> > > > +++ b/arch/riscv/include/asm/sections.h
> > > > @@ -9,5 +9,7 @@
> > > >
> > > >  extern char _start[];
> > > >  extern char _start_kernel[];
> > > > +extern char __init_data_begin[], __init_data_end[];
> > > > +extern char __init_text_begin[], __init_text_end[];
> > > >
> > > >  #endif /* __ASM_SECTIONS_H */
> > > > diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
> > > > index 4cc3a4e2afd3..913429c9c1ae 100644
> > > > --- a/arch/riscv/include/asm/set_memory.h
> > > > +++ b/arch/riscv/include/asm/set_memory.h
> > > > @@ -15,6 +15,7 @@ int set_memory_ro(unsigned long addr, int numpages);
> > > >  int set_memory_rw(unsigned long addr, int numpages);
> > > >  int set_memory_x(unsigned long addr, int numpages);
> > > >  int set_memory_nx(unsigned long addr, int numpages);
> > > > +int set_memory_default(unsigned long addr, int numpages);
> > > >  void protect_kernel_text_data(void);
> > > >  #else
> > > >  static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
> > > > @@ -22,6 +23,7 @@ static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
> > > >  static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
> > > >  static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
> > > >  static inline void protect_kernel_text_data(void) {};
> > > > +static inline int set_memory_default(unsigned long addr, int numpages) { return 0; }
> > > >  #endif
> > > >
> > > >  int set_direct_map_invalid_noflush(struct page *page);
> > > > diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
> > > > index b722c5bf892c..abfbdc8cfef3 100644
> > > > --- a/arch/riscv/kernel/setup.c
> > > > +++ b/arch/riscv/kernel/setup.c
> > > > @@ -123,3 +123,12 @@ static int __init topology_init(void)
> > > >       return 0;
> > > >  }
> > > >  subsys_initcall(topology_init);
> > > > +
> > > > +void free_initmem(void)
> > > > +{
> > > > +     unsigned long init_begin = (unsigned long)__init_begin;
> > > > +     unsigned long init_end = (unsigned long)__init_end;
> > > > +
> > > > +     set_memory_default(init_begin, (init_end - init_begin) >> PAGE_SHIFT);
> > >
> > > And what does "default" imply?
> > > Maybe set_memory_rw() would better name ...
> > >
> > > > +     free_initmem_default(POISON_FREE_INITMEM);
> > > > +}
> > >
> > > ...
> > >
> > > > diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> > > > index 19fecb362d81..04f3fc16aa9c 100644
> > > > --- a/arch/riscv/mm/pageattr.c
> > > > +++ b/arch/riscv/mm/pageattr.c
> > > > @@ -128,6 +128,12 @@ static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
> > > >       return ret;
> > > >  }
> > > >
> > > > +int set_memory_default(unsigned long addr, int numpages)
> > > > +{
> > > > +     return __set_memory(addr, numpages, __pgprot(_PAGE_KERNEL),
> > > > +                         __pgprot(_PAGE_EXEC));
> > >
> > > ... because you'd need to find what _PAGE_KERNEL is, do bitwise ops and
> > > than find out that default is apparently RW :)
> > >
> >
> > Yeah. But We have explicitly disable the EXECUTE bit as these pages were marked
> > with RWX earlier. set_memory_rw makes sure that RW bits are set but
> > doesn't disable
> > the X bit.
>
> Maybe set_memory_rw_nx() then?
> Then there will be no ambiguity about what this function does.
>

Sure. I will do that in v2.
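
So probably something like this (just a sketch), with free_initmem()
switched over to it:

        int set_memory_rw_nx(unsigned long addr, int numpages)
        {
                return __set_memory(addr, numpages,
                                    __pgprot(_PAGE_READ | _PAGE_WRITE),
                                    __pgprot(_PAGE_EXEC));
        }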

> Besides, having set_memory_default() and set_direct_map_default() with
> different masks would be confusing :)
>

Of course :).

> > > > +}
> > > > +
> > > >  int set_memory_ro(unsigned long addr, int numpages)
> > > >  {
> > > >       return __set_memory(addr, numpages, __pgprot(_PAGE_READ),
> > > > --
> > > > 2.25.1
> > > >
> > >
> > > --
> > > Sincerely yours,
> > > Mike.
> > >
> >
> >
> >
> > --
> > Regards,
> > Atish
>
> --
> Sincerely yours,
> Mike.



-- 
Regards,
Atish

