* [PATCH 0/4] arm64: mmu: avoid writeable-executable mappings
@ 2017-02-10 17:16 ` Ard Biesheuvel
  0 siblings, 0 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2017-02-10 17:16 UTC (permalink / raw)
  To: linux-arm-kernel, mark.rutland, will.deacon, catalin.marinas,
	keescook, labbott, james.morse
  Cc: kvmarm, marc.zyngier, christoffer.dall, kernel-hardening,
	andre.przywara, Ard Biesheuvel

Having memory that is writable and executable at the same time is a
security hazard, so we try to avoid such mappings when we can. However,
at boot time, we keep .text mapped writable during the entire init
phase, and the init region itself is mapped RWX as well.

Let's improve the situation by:
- making the alternatives patching use the linear mapping
- splitting the init region into separate text and data regions

This removes all RWX mappings except the really early one created
in head.S (which we could perhaps fix in the future as well).

Ard Biesheuvel (4):
  arm: kvm: move kvm_vgic_global_state out of .text section
  arm64: alternatives: apply boot time fixups via the linear mapping
  arm64: mmu: map .text as read-only from the outset
  arm64: mmu: apply strict permissions to .init.text and .init.data

 arch/arm64/include/asm/mmu.h      |  1 +
 arch/arm64/include/asm/sections.h |  3 +-
 arch/arm64/kernel/alternative.c   |  6 +--
 arch/arm64/kernel/smp.c           |  1 +
 arch/arm64/kernel/vmlinux.lds.S   | 32 ++++++++++-----
 arch/arm64/mm/init.c              |  3 +-
 arch/arm64/mm/mmu.c               | 42 ++++++++++++++------
 virt/kvm/arm/vgic/vgic.c          |  4 +-
 8 files changed, 64 insertions(+), 28 deletions(-)

-- 
2.7.4

* [PATCH 1/4] arm: kvm: move kvm_vgic_global_state out of .text section
  2017-02-10 17:16 ` Ard Biesheuvel
@ 2017-02-10 17:16   ` Ard Biesheuvel
  -1 siblings, 0 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2017-02-10 17:16 UTC (permalink / raw)
  To: linux-arm-kernel, mark.rutland, will.deacon, catalin.marinas,
	keescook, labbott, james.morse
  Cc: Ard Biesheuvel, marc.zyngier, andre.przywara, kernel-hardening, kvmarm

The kvm_vgic_global_state struct contains a static key which is
written to by jump_label_init() at boot time. So, in preparation for
making .text regions truly (well, almost truly) read-only, mark
kvm_vgic_global_state __ro_after_init so that it moves into the
.rodata section instead.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 virt/kvm/arm/vgic/vgic.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 6440b56ec90e..2f373455ed4e 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -29,7 +29,9 @@
 #define DEBUG_SPINLOCK_BUG_ON(p)
 #endif
 
-struct vgic_global __section(.hyp.text) kvm_vgic_global_state = {.gicv3_cpuif = STATIC_KEY_FALSE_INIT,};
+struct vgic_global kvm_vgic_global_state __ro_after_init = {
+	.gicv3_cpuif = STATIC_KEY_FALSE_INIT,
+};
 
 /*
  * Locking order is always:
-- 
2.7.4

* [PATCH 2/4] arm64: alternatives: apply boot time fixups via the linear mapping
  2017-02-10 17:16 ` Ard Biesheuvel
@ 2017-02-10 17:16   ` Ard Biesheuvel
  -1 siblings, 0 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2017-02-10 17:16 UTC (permalink / raw)
  To: linux-arm-kernel, mark.rutland, will.deacon, catalin.marinas,
	keescook, labbott, james.morse
  Cc: Ard Biesheuvel, marc.zyngier, andre.przywara, kernel-hardening, kvmarm

One important rule of thumb when designing a secure software system is
that memory should never be writable and executable at the same time.
We mostly adhere to this rule in the kernel, except at boot time, when
regions may be mapped RWX until after we are done applying alternatives
or making other one-off changes.

For the alternative patching, we can improve the situation by applying
the fixups via the linear mapping, which is never mapped with executable
permissions. So map the linear alias of .text with RW- permissions
initially, and remove the write permissions as soon as alternative
patching has completed.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu.h    |  1 +
 arch/arm64/kernel/alternative.c |  6 ++---
 arch/arm64/kernel/smp.c         |  1 +
 arch/arm64/mm/mmu.c             | 25 ++++++++++++++++----
 4 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 47619411f0ff..5468c834b072 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       unsigned long virt, phys_addr_t size,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
+extern void mark_linear_text_alias_ro(void);
 
 #endif
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 06d650f61da7..eacdbcc45630 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -122,7 +122,7 @@ static void __apply_alternatives(void *alt_region)
 
 		pr_info_once("patching kernel code\n");
 
-		origptr = ALT_ORIG_PTR(alt);
+		origptr = lm_alias(ALT_ORIG_PTR(alt));
 		replptr = ALT_REPL_PTR(alt);
 		nr_inst = alt->alt_len / sizeof(insn);
 
@@ -131,8 +131,8 @@ static void __apply_alternatives(void *alt_region)
 			*(origptr + i) = cpu_to_le32(insn);
 		}
 
-		flush_icache_range((uintptr_t)origptr,
-				   (uintptr_t)(origptr + nr_inst));
+		flush_icache_range((uintptr_t)ALT_ORIG_PTR(alt),
+				   (uintptr_t)(ALT_ORIG_PTR(alt) + nr_inst));
 	}
 }
 
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a8ec5da530af..d6307e311a10 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -432,6 +432,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	setup_cpu_features();
 	hyp_mode_check();
 	apply_alternatives_all();
+	mark_linear_text_alias_ro();
 }
 
 void __init smp_prepare_boot_cpu(void)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2131521ddc24..f4b045d1cc53 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -395,16 +395,31 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 				     debug_pagealloc_enabled());
 
 	/*
-	 * Map the linear alias of the [_text, __init_begin) interval as
-	 * read-only/non-executable. This makes the contents of the
-	 * region accessible to subsystems such as hibernate, but
-	 * protects it from inadvertent modification or execution.
+	 * Map the linear alias of the [_text, __init_begin) interval
+	 * as non-executable now, and remove the write permission in
+	 * mark_linear_text_alias_ro() below (which will be called after
+	 * alternative patching has completed). This makes the contents
+	 * of the region accessible to subsystems such as hibernate,
+	 * but protects it from inadvertent modification or execution.
 	 */
 	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
-			     kernel_end - kernel_start, PAGE_KERNEL_RO,
+			     kernel_end - kernel_start, PAGE_KERNEL,
 			     early_pgtable_alloc, debug_pagealloc_enabled());
 }
 
+void mark_linear_text_alias_ro(void)
+{
+	/*
+	 * Remove the write permissions from the linear alias of .text/.rodata
+	 */
+	create_mapping_late(__pa_symbol(_text), (unsigned long)lm_alias(_text),
+			    (unsigned long)__init_begin - (unsigned long)_text,
+			    PAGE_KERNEL_RO);
+
+	/* flush the TLBs after updating live kernel mappings */
+	flush_tlb_all();
+}
+
 static void __init map_mem(pgd_t *pgd)
 {
 	struct memblock_region *reg;
-- 
2.7.4

* [PATCH 3/4] arm64: mmu: map .text as read-only from the outset
  2017-02-10 17:16 ` Ard Biesheuvel
@ 2017-02-10 17:16   ` Ard Biesheuvel
  -1 siblings, 0 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2017-02-10 17:16 UTC (permalink / raw)
  To: linux-arm-kernel, mark.rutland, will.deacon, catalin.marinas,
	keescook, labbott, james.morse
  Cc: Ard Biesheuvel, marc.zyngier, andre.przywara, kernel-hardening, kvmarm

Now that the alternatives patching code no longer relies on the primary
mapping of .text being writable, we can remove the code that drops the
write permission at post-init time, and instead map .text read-only
from the outset.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f4b045d1cc53..5b0dbb9156ce 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -442,9 +442,6 @@ void mark_rodata_ro(void)
 {
 	unsigned long section_size;
 
-	section_size = (unsigned long)_etext - (unsigned long)_text;
-	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
-			    section_size, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
@@ -487,7 +484,7 @@ static void __init map_kernel(pgd_t *pgd)
 {
 	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
 
-	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
+	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
 	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
 	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
 			   &vmlinux_init);
-- 
2.7.4

* [PATCH 4/4] arm64: mmu: apply strict permissions to .init.text and .init.data
  2017-02-10 17:16 ` Ard Biesheuvel
@ 2017-02-10 17:16   ` Ard Biesheuvel
  -1 siblings, 0 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2017-02-10 17:16 UTC (permalink / raw)
  To: linux-arm-kernel, mark.rutland, will.deacon, catalin.marinas,
	keescook, labbott, james.morse
  Cc: Ard Biesheuvel, marc.zyngier, andre.przywara, kernel-hardening, kvmarm

To avoid having mappings that are writable and executable at the same
time, split the init region into a .init.text region that is mapped
read-only, and a .init.data region that is mapped non-executable.

This is possible now that the alternative patching occurs via the linear
mapping, and the linear alias of the init region is always mapped writable
(but never executable).

Since the alternatives descriptions themselves are read-only data, move
those into the .init.text region. The .rela section does not have to be
mapped at all after applying the relocations, so drop that from the init
mapping entirely.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/sections.h |  3 +-
 arch/arm64/kernel/vmlinux.lds.S   | 32 ++++++++++++++------
 arch/arm64/mm/init.c              |  3 +-
 arch/arm64/mm/mmu.c               | 12 +++++---
 4 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 4e7e7067afdb..22582819b2e5 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -24,7 +24,8 @@ extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
 extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 extern char __hyp_text_start[], __hyp_text_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
+extern char __initdata_begin[], __initdata_end[];
+extern char __inittext_begin[], __inittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
-
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index b8deffa9e1bf..fa144d16bc91 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -143,12 +143,27 @@ SECTIONS
 
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
+	__inittext_begin = .;
 
 	INIT_TEXT_SECTION(8)
 	.exit.text : {
 		ARM_EXIT_KEEP(EXIT_TEXT)
 	}
 
+	. = ALIGN(4);
+	.altinstructions : {
+		__alt_instructions = .;
+		*(.altinstructions)
+		__alt_instructions_end = .;
+	}
+	.altinstr_replacement : {
+		*(.altinstr_replacement)
+	}
+
+	. = ALIGN(PAGE_SIZE);
+	__inittext_end = .;
+	__initdata_begin = .;
+
 	.init.data : {
 		INIT_DATA
 		INIT_SETUP(16)
@@ -164,15 +179,14 @@ SECTIONS
 
 	PERCPU_SECTION(L1_CACHE_BYTES)
 
-	. = ALIGN(4);
-	.altinstructions : {
-		__alt_instructions = .;
-		*(.altinstructions)
-		__alt_instructions_end = .;
-	}
-	.altinstr_replacement : {
-		*(.altinstr_replacement)
-	}
+	. = ALIGN(PAGE_SIZE);
+	__initdata_end = .;
+
+	/*
+	 * The .rela section is not covered by __inittext or __initdata since
+	 * there is no reason to keep it mapped when we switch to the permanent
+	 * swapper page tables.
+	 */
 	.rela : ALIGN(8) {
 		*(.rela .rela*)
 	}
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 8a2713018f2f..6a55feaf46c8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -493,7 +493,8 @@ void free_initmem(void)
 	 * prevents the region from being reused for kernel modules, which
 	 * is not supported by kallsyms.
 	 */
-	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
+	unmap_kernel_range((u64)__inittext_begin,
+			   (u64)(__initdata_end - __inittext_begin));
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5b0dbb9156ce..e6a4bf2acd59 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -482,12 +482,16 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
  */
 static void __init map_kernel(pgd_t *pgd)
 {
-	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
+	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
+				vmlinux_initdata, vmlinux_data;
 
 	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
-	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
-	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
-			   &vmlinux_init);
+	map_kernel_segment(pgd, __start_rodata, __inittext_begin, PAGE_KERNEL,
+			   &vmlinux_rodata);
+	map_kernel_segment(pgd, __inittext_begin, __inittext_end, PAGE_KERNEL_ROX,
+			   &vmlinux_inittext);
+	map_kernel_segment(pgd, __initdata_begin, __initdata_end, PAGE_KERNEL,
+			   &vmlinux_initdata);
 	map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
 
 	if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/4] arm: kvm: move kvm_vgic_global_state out of .text section
  2017-02-10 17:16   ` Ard Biesheuvel
  (?)
@ 2017-02-10 17:26     ` Marc Zyngier
  -1 siblings, 0 replies; 36+ messages in thread
From: Marc Zyngier @ 2017-02-10 17:26 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel, mark.rutland, will.deacon,
	catalin.marinas, keescook, labbott, james.morse
  Cc: andre.przywara, kvmarm, kernel-hardening

On 10/02/17 17:16, Ard Biesheuvel wrote:
> The kvm_vgic_global_state struct contains a static key which is
> written to by jump_label_init() at boot time. So in preparation of
> making .text regions truly (well, almost truly) read-only, mark
> kvm_vgic_global_state __ro_after_init so it moves to the .rodata
> section instead.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  virt/kvm/arm/vgic/vgic.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index 6440b56ec90e..2f373455ed4e 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -29,7 +29,9 @@
>  #define DEBUG_SPINLOCK_BUG_ON(p)
>  #endif
>  
> -struct vgic_global __section(.hyp.text) kvm_vgic_global_state = {.gicv3_cpuif = STATIC_KEY_FALSE_INIT,};
> +struct vgic_global kvm_vgic_global_state __ro_after_init = {
> +	.gicv3_cpuif = STATIC_KEY_FALSE_INIT,
> +};
>  
>  /*
>   * Locking order is always:
> 

I guess that's a nice way to find out if someone is playing with this
structure (which indeed shouldn't change once KVM has initialized).

Acked-by: Marc Zyngier <marc.zyngier@arm.com>

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 3/4] arm64: mmu: map .text as read-only from the outset
  2017-02-10 17:16   ` Ard Biesheuvel
  (?)
@ 2017-02-10 18:41     ` Kees Cook
  -1 siblings, 0 replies; 36+ messages in thread
From: Kees Cook @ 2017-02-10 18:41 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, Mark Rutland, Will Deacon, Catalin Marinas,
	Laura Abbott, James Morse, kvmarm, Marc Zyngier,
	Christoffer Dall, kernel-hardening, andre.przywara

On Fri, Feb 10, 2017 at 9:16 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> Now that alternatives patching code no longer relies on the primary
> mapping of .text being writable, we can remove the code that removes
> the writable permissions post-init time, and map it read-only from
> the outset.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Awesome!

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/4] arm64: mmu: apply strict permissions to .init.text and .init.data
  2017-02-10 17:16   ` Ard Biesheuvel
  (?)
@ 2017-02-10 18:43     ` Kees Cook
  -1 siblings, 0 replies; 36+ messages in thread
From: Kees Cook @ 2017-02-10 18:43 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, Mark Rutland, Will Deacon, Catalin Marinas,
	Laura Abbott, James Morse, kvmarm, Marc Zyngier,
	Christoffer Dall, kernel-hardening, andre.przywara

On Fri, Feb 10, 2017 at 9:16 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> To avoid having mappings that are writable and executable at the same
> time, split the init region into a .init.text region that is mapped
> read-only, and a .init.data region that is mapped non-executable.
>
> This is possible now that the alternative patching occurs via the linear
> mapping, and the linear alias of the init region is always mapped writable
> (but never executable).

Er, so, that means kernel text is still basically RWX... you just
write to the linear mapping and execute the kernel mapping. Can't we
make the linear mapping match the kernel mapping permissions?

-Kees

-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/4] arm64: alternatives: apply boot time fixups via the linear mapping
  2017-02-10 17:16   ` Ard Biesheuvel
  (?)
@ 2017-02-10 18:45     ` Kees Cook
  -1 siblings, 0 replies; 36+ messages in thread
From: Kees Cook @ 2017-02-10 18:45 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, Mark Rutland, Will Deacon, Catalin Marinas,
	Laura Abbott, James Morse, kvmarm, Marc Zyngier,
	Christoffer Dall, kernel-hardening, andre.przywara

On Fri, Feb 10, 2017 at 9:16 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> One important rule of thumb when designing a secure software system is
> that memory should never be writable and executable at the same time.
> We mostly adhere to this rule in the kernel, except at boot time, when
> regions may be mapped RWX until after we are done applying alternatives
> or making other one-off changes.
>
> For the alternative patching, we can improve the situation by applying
> the fixups via the linear mapping, which is never mapped with executable
> permissions. So map the linear alias of .text with RW- permissions
> initially, and remove the write permissions as soon as alternative
> patching has completed.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/mmu.h    |  1 +
>  arch/arm64/kernel/alternative.c |  6 ++---
>  arch/arm64/kernel/smp.c         |  1 +
>  arch/arm64/mm/mmu.c             | 25 ++++++++++++++++----
>  4 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 47619411f0ff..5468c834b072 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>                                unsigned long virt, phys_addr_t size,
>                                pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
> +extern void mark_linear_text_alias_ro(void);
>
>  #endif
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index 06d650f61da7..eacdbcc45630 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -122,7 +122,7 @@ static void __apply_alternatives(void *alt_region)
>
>                 pr_info_once("patching kernel code\n");
>
> -               origptr = ALT_ORIG_PTR(alt);
> +               origptr = lm_alias(ALT_ORIG_PTR(alt));
>                 replptr = ALT_REPL_PTR(alt);
>                 nr_inst = alt->alt_len / sizeof(insn);
>
> @@ -131,8 +131,8 @@ static void __apply_alternatives(void *alt_region)
>                         *(origptr + i) = cpu_to_le32(insn);
>                 }
>
> -               flush_icache_range((uintptr_t)origptr,
> -                                  (uintptr_t)(origptr + nr_inst));
> +               flush_icache_range((uintptr_t)ALT_ORIG_PTR(alt),
> +                                  (uintptr_t)(ALT_ORIG_PTR(alt) + nr_inst));
>         }
>  }
>
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index a8ec5da530af..d6307e311a10 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -432,6 +432,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
>         setup_cpu_features();
>         hyp_mode_check();
>         apply_alternatives_all();
> +       mark_linear_text_alias_ro();
>  }
>
>  void __init smp_prepare_boot_cpu(void)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 2131521ddc24..f4b045d1cc53 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -395,16 +395,31 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
>                                      debug_pagealloc_enabled());
>
>         /*
> -        * Map the linear alias of the [_text, __init_begin) interval as
> -        * read-only/non-executable. This makes the contents of the
> -        * region accessible to subsystems such as hibernate, but
> -        * protects it from inadvertent modification or execution.
> +        * Map the linear alias of the [_text, __init_begin) interval
> +        * as non-executable now, and remove the write permission in
> +        * mark_linear_text_alias_ro() below (which will be called after
> +        * alternative patching has completed). This makes the contents
> +        * of the region accessible to subsystems such as hibernate,
> +        * but protects it from inadvertent modification or execution.
>          */
>         __create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
> -                            kernel_end - kernel_start, PAGE_KERNEL_RO,
> +                            kernel_end - kernel_start, PAGE_KERNEL,
>                              early_pgtable_alloc, debug_pagealloc_enabled());
>  }
>
> +void mark_linear_text_alias_ro(void)
> +{
> +       /*
> +        * Remove the write permissions from the linear alias of .text/.rodata
> +        */
> +       create_mapping_late(__pa_symbol(_text), (unsigned long)lm_alias(_text),
> +                           (unsigned long)__init_begin - (unsigned long)_text,
> +                           PAGE_KERNEL_RO);
> +
> +       /* flush the TLBs after updating live kernel mappings */
> +       flush_tlb_all();
> +}

Oh, dur, sorry, I missed this the first time through. Nevermind! Looks
like linear will be RO, kernel will be ROX at runtime, and during
boot, kernel will be ROX, and linear will be RW. Nice!

-Kees

> +
>  static void __init map_mem(pgd_t *pgd)
>  {
>         struct memblock_region *reg;
> --
> 2.7.4
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* [kernel-hardening] Re: [PATCH 2/4] arm64: alternatives: apply boot time fixups via the linear mapping
@ 2017-02-10 18:45     ` Kees Cook
  0 siblings, 0 replies; 36+ messages in thread
From: Kees Cook @ 2017-02-10 18:45 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, Mark Rutland, Will Deacon, Catalin Marinas,
	Laura Abbott, James Morse, kvmarm, Marc Zyngier,
	Christoffer Dall, kernel-hardening, andre.przywara

On Fri, Feb 10, 2017 at 9:16 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> One important rule of thumb when designing a secure software system is
> that memory should never be writable and executable at the same time.
> We mostly adhere to this rule in the kernel, except at boot time, when
> regions may be mapped RWX until after we are done applying alternatives
> or making other one-off changes.
>
> For the alternative patching, we can improve the situation by applying
> the fixups via the linear mapping, which is never mapped with executable
> permissions. So map the linear alias of .text with RW- permissions
> initially, and remove the write permissions as soon as alternative
> patching has completed.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/mmu.h    |  1 +
>  arch/arm64/kernel/alternative.c |  6 ++---
>  arch/arm64/kernel/smp.c         |  1 +
>  arch/arm64/mm/mmu.c             | 25 ++++++++++++++++----
>  4 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 47619411f0ff..5468c834b072 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>                                unsigned long virt, phys_addr_t size,
>                                pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
> +extern void mark_linear_text_alias_ro(void);
>
>  #endif
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index 06d650f61da7..eacdbcc45630 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -122,7 +122,7 @@ static void __apply_alternatives(void *alt_region)
>
>                 pr_info_once("patching kernel code\n");
>
> -               origptr = ALT_ORIG_PTR(alt);
> +               origptr = lm_alias(ALT_ORIG_PTR(alt));
>                 replptr = ALT_REPL_PTR(alt);
>                 nr_inst = alt->alt_len / sizeof(insn);
>
> @@ -131,8 +131,8 @@ static void __apply_alternatives(void *alt_region)
>                         *(origptr + i) = cpu_to_le32(insn);
>                 }
>
> -               flush_icache_range((uintptr_t)origptr,
> -                                  (uintptr_t)(origptr + nr_inst));
> +               flush_icache_range((uintptr_t)ALT_ORIG_PTR(alt),
> +                                  (uintptr_t)(ALT_ORIG_PTR(alt) + nr_inst));
>         }
>  }
>
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index a8ec5da530af..d6307e311a10 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -432,6 +432,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
>         setup_cpu_features();
>         hyp_mode_check();
>         apply_alternatives_all();
> +       mark_linear_text_alias_ro();
>  }
>
>  void __init smp_prepare_boot_cpu(void)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 2131521ddc24..f4b045d1cc53 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -395,16 +395,31 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
>                                      debug_pagealloc_enabled());
>
>         /*
> -        * Map the linear alias of the [_text, __init_begin) interval as
> -        * read-only/non-executable. This makes the contents of the
> -        * region accessible to subsystems such as hibernate, but
> -        * protects it from inadvertent modification or execution.
> +        * Map the linear alias of the [_text, __init_begin) interval
> +        * as non-executable now, and remove the write permission in
> +        * mark_linear_text_alias_ro() below (which will be called after
> +        * alternative patching has completed). This makes the contents
> +        * of the region accessible to subsystems such as hibernate,
> +        * but protects it from inadvertent modification or execution.
>          */
>         __create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
> -                            kernel_end - kernel_start, PAGE_KERNEL_RO,
> +                            kernel_end - kernel_start, PAGE_KERNEL,
>                              early_pgtable_alloc, debug_pagealloc_enabled());
>  }
>
> +void mark_linear_text_alias_ro(void)
> +{
> +       /*
> +        * Remove the write permissions from the linear alias of .text/.rodata
> +        */
> +       create_mapping_late(__pa_symbol(_text), (unsigned long)lm_alias(_text),
> +                           (unsigned long)__init_begin - (unsigned long)_text,
> +                           PAGE_KERNEL_RO);
> +
> +       /* flush the TLBs after updating live kernel mappings */
> +       flush_tlb_all();
> +}

Oh, dur, sorry, I missed this the first time through. Nevermind! Looks
like linear will be RO, kernel will be ROX at runtime, and during
boot, kernel will be ROX, and linear will be RW. Nice!

-Kees

> +
>  static void __init map_mem(pgd_t *pgd)
>  {
>         struct memblock_region *reg;
> --
> 2.7.4
>



-- 
Kees Cook
Pixel Security

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/4] arm64: alternatives: apply boot time fixups via the linear mapping
  2017-02-10 17:16   ` Ard Biesheuvel
  (?)
@ 2017-02-10 18:49     ` Suzuki K Poulose
  -1 siblings, 0 replies; 36+ messages in thread
From: Suzuki K Poulose @ 2017-02-10 18:49 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel, mark.rutland, will.deacon,
	catalin.marinas, keescook, labbott, james.morse
  Cc: marc.zyngier, andre.przywara, kvmarm, kernel-hardening

On 10/02/17 17:16, Ard Biesheuvel wrote:
> One important rule of thumb when designing a secure software system is
> that memory should never be writable and executable at the same time.
> We mostly adhere to this rule in the kernel, except at boot time, when
> regions may be mapped RWX until after we are done applying alternatives
> or making other one-off changes.
>
> For the alternative patching, we can improve the situation by applying
> the fixups via the linear mapping, which is never mapped with executable
> permissions. So map the linear alias of .text with RW- permissions
> initially, and remove the write permissions as soon as alternative
> patching has completed.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/include/asm/mmu.h    |  1 +
>  arch/arm64/kernel/alternative.c |  6 ++---
>  arch/arm64/kernel/smp.c         |  1 +
>  arch/arm64/mm/mmu.c             | 25 ++++++++++++++++----
>  4 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 47619411f0ff..5468c834b072 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       unsigned long virt, phys_addr_t size,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
> +extern void mark_linear_text_alias_ro(void);
>
>  #endif
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index 06d650f61da7..eacdbcc45630 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -122,7 +122,7 @@ static void __apply_alternatives(void *alt_region)
>
>  		pr_info_once("patching kernel code\n");
>
> -		origptr = ALT_ORIG_PTR(alt);
> +		origptr = lm_alias(ALT_ORIG_PTR(alt));
>  		replptr = ALT_REPL_PTR(alt);
>  		nr_inst = alt->alt_len / sizeof(insn);

Correct me if I am wrong, but I think this would make "get_alt_insn" generate
branch instructions based on the aliased linear-mapped address, which could
branch to the linear address of the branch target, which doesn't have execute
permissions set. I think we should use ALT_ORIG_PTR(alt), instead of origptr,
for the calls to get_alt_insn().

Suzuki

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/4] arm64: alternatives: apply boot time fixups via the linear mapping
  2017-02-10 18:49     ` Suzuki K Poulose
  (?)
@ 2017-02-10 18:53       ` Ard Biesheuvel
  -1 siblings, 0 replies; 36+ messages in thread
From: Ard Biesheuvel @ 2017-02-10 18:53 UTC (permalink / raw)
  To: Suzuki K Poulose
  Cc: keescook, kernel-hardening, marc.zyngier, catalin.marinas,
	will.deacon, andre.przywara, kvmarm, linux-arm-kernel, labbott


> On 10 Feb 2017, at 18:49, Suzuki K Poulose <Suzuki.Poulose@arm.com> wrote:
> 
>> On 10/02/17 17:16, Ard Biesheuvel wrote:
>> One important rule of thumb when designing a secure software system is
>> that memory should never be writable and executable at the same time.
>> We mostly adhere to this rule in the kernel, except at boot time, when
>> regions may be mapped RWX until after we are done applying alternatives
>> or making other one-off changes.
>> 
>> For the alternative patching, we can improve the situation by applying
>> the fixups via the linear mapping, which is never mapped with executable
>> permissions. So map the linear alias of .text with RW- permissions
>> initially, and remove the write permissions as soon as alternative
>> patching has completed.
>> 
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>> arch/arm64/include/asm/mmu.h    |  1 +
>> arch/arm64/kernel/alternative.c |  6 ++---
>> arch/arm64/kernel/smp.c         |  1 +
>> arch/arm64/mm/mmu.c             | 25 ++++++++++++++++----
>> 4 files changed, 25 insertions(+), 8 deletions(-)
>> 
>> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
>> index 47619411f0ff..5468c834b072 100644
>> --- a/arch/arm64/include/asm/mmu.h
>> +++ b/arch/arm64/include/asm/mmu.h
>> @@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>>                   unsigned long virt, phys_addr_t size,
>>                   pgprot_t prot, bool page_mappings_only);
>> extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
>> +extern void mark_linear_text_alias_ro(void);
>> 
>> #endif
>> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
>> index 06d650f61da7..eacdbcc45630 100644
>> --- a/arch/arm64/kernel/alternative.c
>> +++ b/arch/arm64/kernel/alternative.c
>> @@ -122,7 +122,7 @@ static void __apply_alternatives(void *alt_region)
>> 
>>        pr_info_once("patching kernel code\n");
>> 
>> -        origptr = ALT_ORIG_PTR(alt);
>> +        origptr = lm_alias(ALT_ORIG_PTR(alt));
>>        replptr = ALT_REPL_PTR(alt);
>>        nr_inst = alt->alt_len / sizeof(insn);
> 
> Correct me if I am wrong, I think this would make "get_alt_insn" generate branch
> instructions based on the  aliased linear mapped address, which could branch to linear
> address of the branch target which doesn't have Execute permissions set.
> I think we sould use ALT_ORIG_PTR(alt), instead of origptr for the calls to
> get_alt_insn().
> 

Good point, you are probably right.

Will fix

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/4] arm64: mmu: avoid writeable-executable mappings
  2017-02-10 17:16 ` Ard Biesheuvel
  (?)
@ 2017-02-10 21:42   ` Laura Abbott
  -1 siblings, 0 replies; 36+ messages in thread
From: Laura Abbott @ 2017-02-10 21:42 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel, mark.rutland, will.deacon,
	catalin.marinas, keescook, labbott, james.morse
  Cc: kvmarm, marc.zyngier, christoffer.dall, kernel-hardening, andre.przywara

On 02/10/2017 09:16 AM, Ard Biesheuvel wrote:
> Having memory that is writable and executable at the same time is a
> security hazard, and so we tend to avoid those when we can. However,
> at boot time, we keep .text mapped writable during the entire init
> phase, and the init region itself is mapped rwx as well.
> 
> Let's improve the situation by:
> - making the alternatives patching use the linear mapping
> - splitting the init region into separate text and data regions
> 
> This removes all RWX mappings except the really early one created
> in head.S (which we could perhaps fix in the future as well)
> 
> Ard Biesheuvel (4):
>   arm: kvm: move kvm_vgic_global_state out of .text section
>   arm64: alternatives: apply boot time fixups via the linear mapping
>   arm64: mmu: map .text as read-only from the outset
>   arm64: mmu: apply strict permissions to .init.text and .init.data
> 
>  arch/arm64/include/asm/mmu.h      |  1 +
>  arch/arm64/include/asm/sections.h |  3 +-
>  arch/arm64/kernel/alternative.c   |  6 +--
>  arch/arm64/kernel/smp.c           |  1 +
>  arch/arm64/kernel/vmlinux.lds.S   | 32 ++++++++++-----
>  arch/arm64/mm/init.c              |  3 +-
>  arch/arm64/mm/mmu.c               | 42 ++++++++++++++------
>  virt/kvm/arm/vgic/vgic.c          |  4 +-
>  8 files changed, 64 insertions(+), 28 deletions(-)
> 

Reviewed-by: Laura Abbott <labbott@redhat.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

