* [PATCH v2 0/5] arm64: mmu: avoid writeable-executable mappings
From: Ard Biesheuvel @ 2017-02-11 20:23 UTC
  To: linux-arm-kernel, mark.rutland, catalin.marinas, will.deacon, labbott
  Cc: Ard Biesheuvel, keescook, marc.zyngier, andre.przywara,
	kernel-hardening, kvmarm

Having memory that is writable and executable at the same time is a
security hazard, and so we tend to avoid such mappings when we can. However,
at boot time, we keep .text mapped writable during the entire init
phase, and the init region itself is mapped rwx as well.

Let's improve the situation by:
- making the alternatives patching use the linear mapping
- splitting the init region into separate text and data regions

This removes all RWX mappings except the really early one created
in head.S (which we could perhaps fix in the future as well).

Changes since v1:
- add patch to move TLB maintenance into create_mapping_late() and remove it
  from its callers (#2)
- use the true address, not the linear alias, when patching branch instructions,
  spotted by Suzuki (#3)
- mark mark_linear_text_alias_ro() __init (#3)
- move the .rela section back into __initdata: as it turns out, leaving a hole
  between the segments results in a peculiar situation where other unrelated
  allocations end up right in the middle of the kernel Image, which is
  probably a bad idea (#5). See below for an example.
- add acks

Ard Biesheuvel (5):
  arm: kvm: move kvm_vgic_global_state out of .text section
  arm64: mmu: move TLB maintenance from callers to create_mapping_late()
  arm64: alternatives: apply boot time fixups via the linear mapping
  arm64: mmu: map .text as read-only from the outset
  arm64: mmu: apply strict permissions to .init.text and .init.data

 arch/arm64/include/asm/mmu.h      |  1 +
 arch/arm64/include/asm/sections.h |  3 +-
 arch/arm64/kernel/alternative.c   |  2 +-
 arch/arm64/kernel/smp.c           |  1 +
 arch/arm64/kernel/vmlinux.lds.S   | 25 +++++++----
 arch/arm64/mm/mmu.c               | 45 +++++++++++++-------
 virt/kvm/arm/vgic/vgic.c          |  4 +-
 7 files changed, 53 insertions(+), 28 deletions(-)

-- 
2.7.4

The various kernel segments are vmapped from paging_init() [after inlining]

0xffffff8008080000-0xffffff80088b0000 8585216 paging_init+0x84/0x584 phys=40080000 vmap
0xffffff80088b0000-0xffffff8008cb0000 4194304 paging_init+0xa4/0x584 phys=408b0000 vmap
0xffffff8008cb0000-0xffffff8008d27000  487424 paging_init+0xc4/0x584 phys=40cb0000 vmap
0xffffff8008d27000-0xffffff8008da3000  507904 paging_init+0xe8/0x584 phys=40d27000 vmap
0xffffff8008dd1000-0xffffff8008dd3000    8192 devm_ioremap_nocache+0x54/0xa8 phys=a003000 ioremap
0xffffff8008dd3000-0xffffff8008dd5000    8192 devm_ioremap_nocache+0x54/0xa8 phys=a003000 ioremap
0xffffff8008dde000-0xffffff8008de0000    8192 pl031_probe+0x80/0x1e8 phys=9010000 ioremap
0xffffff8008e4c000-0xffffff8008e50000   16384 n_tty_open+0x1c/0xd0 pages=3 vmalloc
0xffffff8008e54000-0xffffff8008e58000   16384 n_tty_open+0x1c/0xd0 pages=3 vmalloc
0xffffff8008e80000-0xffffff8008e84000   16384 n_tty_open+0x1c/0xd0 pages=3 vmalloc
0xffffff8008e84000-0xffffff8008e88000   16384 n_tty_open+0x1c/0xd0 pages=3 vmalloc
0xffffff8008ea0000-0xffffff8008ea2000    8192 bpf_prog_alloc+0x3c/0xb8 pages=1 vmalloc
0xffffff8008ef2000-0xffffff8008ef6000   16384 n_tty_open+0x1c/0xd0 pages=3 vmalloc
0xffffff8008ef6000-0xffffff8008efa000   16384 n_tty_open+0x1c/0xd0 pages=3 vmalloc
0xffffff8009010000-0xffffff800914b000 1290240 paging_init+0x10c/0x584 phys=41010000 vmap
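
(For reference: the five paging_init() entries above are, presumably in
segment order, the vmap'd kernel regions .text, .rodata, .init.text,
.init.data and .data. The ioremap/vmalloc entries in between illustrate
the point made above: unrelated allocations readily fill any
vmalloc-space gap left between the kernel segments.)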

* [PATCH v2 1/5] arm: kvm: move kvm_vgic_global_state out of .text section
From: Ard Biesheuvel @ 2017-02-11 20:23 UTC
  To: linux-arm-kernel, mark.rutland, catalin.marinas, will.deacon, labbott
  Cc: Ard Biesheuvel, keescook, marc.zyngier, andre.przywara,
	kernel-hardening, kvmarm

The kvm_vgic_global_state struct contains a static key which is
written to by jump_label_init() at boot time. So in preparation for
making .text regions truly (well, almost truly) read-only, mark
kvm_vgic_global_state __ro_after_init so it moves to the .rodata
section instead.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 virt/kvm/arm/vgic/vgic.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 6440b56ec90e..2f373455ed4e 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -29,7 +29,9 @@
 #define DEBUG_SPINLOCK_BUG_ON(p)
 #endif
 
-struct vgic_global __section(.hyp.text) kvm_vgic_global_state = {.gicv3_cpuif = STATIC_KEY_FALSE_INIT,};
+struct vgic_global kvm_vgic_global_state __ro_after_init = {
+	.gicv3_cpuif = STATIC_KEY_FALSE_INIT,
+};
 
 /*
  * Locking order is always:
-- 
2.7.4
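
As a minimal sketch of the mechanism relied on here (the variable and
helper names below are invented for illustration): a __ro_after_init
object is emitted into .data..ro_after_init, which the generic linker
script places in the rodata segment, so it remains writable while __init
code runs and is sealed once mark_rodata_ro() is called.

	#include <linux/cache.h>	/* __ro_after_init */
	#include <linux/init.h>
	#include <linux/kernel.h>	/* kstrtoul() */

	static unsigned long boot_tunable __ro_after_init = 42;

	static int __init boot_tunable_setup(char *s)
	{
		/* still writable here: this runs before mark_rodata_ro() */
		return kstrtoul(s, 0, &boot_tunable);
	}
	early_param("boot_tunable", boot_tunable_setup);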

* [PATCH v2 2/5] arm64: mmu: move TLB maintenance from callers to create_mapping_late()
From: Ard Biesheuvel @ 2017-02-11 20:23 UTC
  To: linux-arm-kernel, mark.rutland, catalin.marinas, will.deacon, labbott
  Cc: Ard Biesheuvel, keescook, marc.zyngier, andre.przywara,
	kernel-hardening, kvmarm

In preparation for changing the way we invoke create_mapping_late() (which
is currently invoked twice from the same function), move the TLB flushing
it performs from the caller into create_mapping_late() itself, and change
it to TLB maintenance by VA rather than a full flush, which is more
appropriate here.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2131521ddc24..9e0ec1a8cd3b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -356,6 +356,9 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 
 	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
 			     NULL, debug_pagealloc_enabled());
+
+	/* flush the TLBs after updating live kernel mappings */
+	flush_tlb_kernel_range(virt, virt + size);
 }
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
@@ -438,9 +441,6 @@ void mark_rodata_ro(void)
 	create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
 			    section_size, PAGE_KERNEL_RO);
 
-	/* flush the TLBs after updating live kernel mappings */
-	flush_tlb_all();
-
 	debug_checkwx();
 }
 
-- 
2.7.4
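
For reference, the by-VA maintenance boils down to roughly the following
on arm64 (a simplified sketch with a made-up name, not the exact
implementation; the real flush_tlb_kernel_range() in asm/tlbflush.h also
falls back to a full flush for very large ranges):

	static inline void kernel_tlb_flush_by_va(unsigned long start,
						  unsigned long end)
	{
		unsigned long addr;

		dsb(ishst);	/* make the PTE updates visible first */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			asm("tlbi vaae1is, %0" : : "r" (addr >> 12));
		dsb(ish);	/* wait for the invalidations to complete */
		isb();
	}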

* [PATCH v2 3/5] arm64: alternatives: apply boot time fixups via the linear mapping
From: Ard Biesheuvel @ 2017-02-11 20:23 UTC
  To: linux-arm-kernel, mark.rutland, catalin.marinas, will.deacon, labbott
  Cc: Ard Biesheuvel, keescook, marc.zyngier, andre.przywara,
	kernel-hardening, kvmarm

One important rule of thumb when designing a secure software system is
that memory should never be writable and executable at the same time.
We mostly adhere to this rule in the kernel, except at boot time, when
regions may be mapped RWX until after we are done applying alternatives
or making other one-off changes.

For the alternative patching, we can improve the situation by applying
the fixups via the linear mapping, which is never mapped with executable
permissions. So map the linear alias of .text with RW- permissions
initially, and remove the write permissions as soon as alternative
patching has completed.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/mmu.h    |  1 +
 arch/arm64/kernel/alternative.c |  2 +-
 arch/arm64/kernel/smp.c         |  1 +
 arch/arm64/mm/mmu.c             | 22 +++++++++++++++-----
 4 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 47619411f0ff..5468c834b072 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 			       unsigned long virt, phys_addr_t size,
 			       pgprot_t prot, bool page_mappings_only);
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
+extern void mark_linear_text_alias_ro(void);
 
 #endif
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 06d650f61da7..8cee29d9bc07 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -128,7 +128,7 @@ static void __apply_alternatives(void *alt_region)
 
 		for (i = 0; i < nr_inst; i++) {
 			insn = get_alt_insn(alt, origptr + i, replptr + i);
-			*(origptr + i) = cpu_to_le32(insn);
+			((u32 *)lm_alias(origptr))[i] = cpu_to_le32(insn);
 		}
 
 		flush_icache_range((uintptr_t)origptr,
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index a8ec5da530af..d6307e311a10 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -432,6 +432,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
 	setup_cpu_features();
 	hyp_mode_check();
 	apply_alternatives_all();
+	mark_linear_text_alias_ro();
 }
 
 void __init smp_prepare_boot_cpu(void)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 9e0ec1a8cd3b..7ed981c7f4c0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -398,16 +398,28 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 				     debug_pagealloc_enabled());
 
 	/*
-	 * Map the linear alias of the [_text, __init_begin) interval as
-	 * read-only/non-executable. This makes the contents of the
-	 * region accessible to subsystems such as hibernate, but
-	 * protects it from inadvertent modification or execution.
+	 * Map the linear alias of the [_text, __init_begin) interval
+	 * as non-executable now, and remove the write permission in
+	 * mark_linear_text_alias_ro() below (which will be called after
+	 * alternative patching has completed). This makes the contents
+	 * of the region accessible to subsystems such as hibernate,
+	 * but protects it from inadvertent modification or execution.
 	 */
 	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
-			     kernel_end - kernel_start, PAGE_KERNEL_RO,
+			     kernel_end - kernel_start, PAGE_KERNEL,
 			     early_pgtable_alloc, debug_pagealloc_enabled());
 }
 
+void __init mark_linear_text_alias_ro(void)
+{
+	/*
+	 * Remove the write permissions from the linear alias of .text/.rodata
+	 */
+	create_mapping_late(__pa_symbol(_text), (unsigned long)lm_alias(_text),
+			    (unsigned long)__init_begin - (unsigned long)_text,
+			    PAGE_KERNEL_RO);
+}
+
 static void __init map_mem(pgd_t *pgd)
 {
 	struct memblock_region *reg;
-- 
2.7.4
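
The core of the change, as a minimal sketch (the helper name is made up;
lm_alias(x) is __va(__pa_symbol(x)), i.e. the linear-map address of a
kernel-image symbol):

	/* patch one instruction word in the (read-only) .text mapping */
	static void __init patch_text_word(u32 *origptr, u32 insn)
	{
		u32 *alias = lm_alias(origptr);

		/* write via the writable, non-executable linear alias */
		*alias = cpu_to_le32(insn);

		/* I-cache maintenance is done on the executable address */
		flush_icache_range((uintptr_t)origptr,
				   (uintptr_t)(origptr + 1));
	}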

* [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset
From: Ard Biesheuvel @ 2017-02-11 20:23 UTC
  To: linux-arm-kernel, mark.rutland, catalin.marinas, will.deacon, labbott
  Cc: Ard Biesheuvel, keescook, marc.zyngier, andre.przywara,
	kernel-hardening, kvmarm

Now that the alternatives patching code no longer relies on the primary
mapping of .text being writable, we can remove the code that drops the
writable permission after init, and map .text read-only from
the outset.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 7ed981c7f4c0..e97f1ce967ec 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -442,9 +442,6 @@ void mark_rodata_ro(void)
 {
 	unsigned long section_size;
 
-	section_size = (unsigned long)_etext - (unsigned long)_text;
-	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
-			    section_size, PAGE_KERNEL_ROX);
 	/*
 	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
 	 * to cover NOTES and EXCEPTION_TABLE.
@@ -484,7 +481,7 @@ static void __init map_kernel(pgd_t *pgd)
 {
 	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
 
-	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
+	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
 	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
 	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
 			   &vmlinux_init);
-- 
2.7.4
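
For readers less familiar with the arm64 pgprot names involved, the
permissions amount to roughly the following (paraphrased, not verbatim,
from asm/pgtable-prot.h):

	/*
	 * PAGE_KERNEL       RW, non-exec - normal kernel data
	 * PAGE_KERNEL_RO    RO, non-exec - rodata after mark_rodata_ro()
	 * PAGE_KERNEL_ROX   RO, exec     - .text from the outset (this patch)
	 * PAGE_KERNEL_EXEC  RW, exec     - what .text used to be until
	 *                                  mark_rodata_ro() ran
	 */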

* [PATCH v2 5/5] arm64: mmu: apply strict permissions to .init.text and .init.data
From: Ard Biesheuvel @ 2017-02-11 20:23 UTC
  To: linux-arm-kernel, mark.rutland, catalin.marinas, will.deacon, labbott
  Cc: Ard Biesheuvel, keescook, marc.zyngier, andre.przywara,
	kernel-hardening, kvmarm

To avoid having mappings that are writable and executable at the same
time, split the init region into a .init.text region that is mapped
read-only, and a .init.data region that is mapped non-executable.

This is possible now that the alternative patching occurs via the linear
mapping, and the linear alias of the init region is always mapped writable
(but never executable).

Since the alternatives descriptions themselves are read-only data, move
those into the .init.text region.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/sections.h |  3 ++-
 arch/arm64/kernel/vmlinux.lds.S   | 25 +++++++++++++-------
 arch/arm64/mm/mmu.c               | 12 ++++++----
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 4e7e7067afdb..22582819b2e5 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -24,7 +24,8 @@ extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
 extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
 extern char __hyp_text_start[], __hyp_text_end[];
 extern char __idmap_text_start[], __idmap_text_end[];
+extern char __initdata_begin[], __initdata_end[];
+extern char __inittext_begin[], __inittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
-
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index b8deffa9e1bf..2c93d259046c 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -143,12 +143,27 @@ SECTIONS
 
 	. = ALIGN(SEGMENT_ALIGN);
 	__init_begin = .;
+	__inittext_begin = .;
 
 	INIT_TEXT_SECTION(8)
 	.exit.text : {
 		ARM_EXIT_KEEP(EXIT_TEXT)
 	}
 
+	. = ALIGN(4);
+	.altinstructions : {
+		__alt_instructions = .;
+		*(.altinstructions)
+		__alt_instructions_end = .;
+	}
+	.altinstr_replacement : {
+		*(.altinstr_replacement)
+	}
+
+	. = ALIGN(PAGE_SIZE);
+	__inittext_end = .;
+	__initdata_begin = .;
+
 	.init.data : {
 		INIT_DATA
 		INIT_SETUP(16)
@@ -164,15 +179,6 @@ SECTIONS
 
 	PERCPU_SECTION(L1_CACHE_BYTES)
 
-	. = ALIGN(4);
-	.altinstructions : {
-		__alt_instructions = .;
-		*(.altinstructions)
-		__alt_instructions_end = .;
-	}
-	.altinstr_replacement : {
-		*(.altinstr_replacement)
-	}
 	.rela : ALIGN(8) {
 		*(.rela .rela*)
 	}
@@ -181,6 +187,7 @@ SECTIONS
 	__rela_size	= SIZEOF(.rela);
 
 	. = ALIGN(SEGMENT_ALIGN);
+	__initdata_end = .;
 	__init_end = .;
 
 	_data = .;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index e97f1ce967ec..c53c43b4ed3f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -479,12 +479,16 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
  */
 static void __init map_kernel(pgd_t *pgd)
 {
-	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
+	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
+				vmlinux_initdata, vmlinux_data;
 
 	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
-	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
-	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
-			   &vmlinux_init);
+	map_kernel_segment(pgd, __start_rodata, __inittext_begin, PAGE_KERNEL,
+			   &vmlinux_rodata);
+	map_kernel_segment(pgd, __inittext_begin, __inittext_end, PAGE_KERNEL_ROX,
+			   &vmlinux_inittext);
+	map_kernel_segment(pgd, __initdata_begin, __initdata_end, PAGE_KERNEL,
+			   &vmlinux_initdata);
 	map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
 
 	if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
-- 
2.7.4
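
Taken together with patches 3 and 4, the kernel mapping set up by
map_kernel() is then intended to look roughly like this:

	_text            .. _etext            PAGE_KERNEL_ROX  (RO + exec)
	__start_rodata   .. __inittext_begin  PAGE_KERNEL      (RW until
	                                      mark_rodata_ro() runs)
	__inittext_begin .. __inittext_end    PAGE_KERNEL_ROX  (RO + exec)
	__initdata_begin .. __initdata_end    PAGE_KERNEL      (RW, non-exec)
	_data            .. _end              PAGE_KERNEL      (RW, non-exec)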

* Re: [PATCH v2 1/5] arm: kvm: move kvm_vgic_global_state out of .text section
From: Mark Rutland @ 2017-02-13 17:58 UTC
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, catalin.marinas, will.deacon, labbott, kvmarm,
	marc.zyngier, andre.przywara, Suzuki.Poulose, james.morse,
	keescook, kernel-hardening

On Sat, Feb 11, 2017 at 08:23:02PM +0000, Ard Biesheuvel wrote:
> The kvm_vgic_global_state struct contains a static key which is
> written to by jump_label_init() at boot time. So in preparation for
> making .text regions truly (well, almost truly) read-only, mark
> kvm_vgic_global_state __ro_after_init so it moves to the .rodata
> section instead.
> 
> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
> Reviewed-by: Laura Abbott <labbott@redhat.com>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

With this applied I can boot Juno happily and launch working VMs.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  virt/kvm/arm/vgic/vgic.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
> index 6440b56ec90e..2f373455ed4e 100644
> --- a/virt/kvm/arm/vgic/vgic.c
> +++ b/virt/kvm/arm/vgic/vgic.c
> @@ -29,7 +29,9 @@
>  #define DEBUG_SPINLOCK_BUG_ON(p)
>  #endif
>  
> -struct vgic_global __section(.hyp.text) kvm_vgic_global_state = {.gicv3_cpuif = STATIC_KEY_FALSE_INIT,};
> +struct vgic_global kvm_vgic_global_state __ro_after_init = {
> +	.gicv3_cpuif = STATIC_KEY_FALSE_INIT,
> +};
>  
>  /*
>   * Locking order is always:
> -- 
> 2.7.4
> 

* Re: [PATCH v2 2/5] arm64: mmu: move TLB maintenance from callers to create_mapping_late()
From: Mark Rutland @ 2017-02-14 15:54 UTC
  To: Ard Biesheuvel
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott

On Sat, Feb 11, 2017 at 08:23:03PM +0000, Ard Biesheuvel wrote:
> In preparation for changing the way we invoke create_mapping_late() (which
> is currently invoked twice from the same function), move the TLB flushing
> it performs from the caller into create_mapping_late() itself, and change
> it to TLB maintenance by VA rather than a full flush, which is more
> appropriate here.

It's not immediately clear what's meant by "changing the way we invoke
create_mapping_late()" here.

It's probably worth explicitly mentioning that we need to add another
caller of create_mapping_late(), and this saves us adding (overly
strong) TLB maintenance to all callers.

> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/mm/mmu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 2131521ddc24..9e0ec1a8cd3b 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -356,6 +356,9 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
>  
>  	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
>  			     NULL, debug_pagealloc_enabled());
> +
> +	/* flush the TLBs after updating live kernel mappings */
> +	flush_tlb_kernel_range(virt, virt + size);
>  }

It feels a little odd to have the maintenance here, given that we still
call this *create*_mapping_late().

Given that the only users of this are changing permissions, perhaps we
should rename this to change_mapping_prot(), or something like that?
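
Just to illustrate (a sketch only, reusing the body from this patch under
the suggested name):

	static void change_mapping_prot(phys_addr_t phys, unsigned long virt,
					phys_addr_t size, pgprot_t prot)
	{
		__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot,
				     NULL, debug_pagealloc_enabled());

		/* flush the TLBs after updating live kernel mappings */
		flush_tlb_kernel_range(virt, virt + size);
	}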

Otherwise, this looks fine to me, and boots fine. Either way:

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

>  static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
> @@ -438,9 +441,6 @@ void mark_rodata_ro(void)
>  	create_mapping_late(__pa_symbol(__start_rodata), (unsigned long)__start_rodata,
>  			    section_size, PAGE_KERNEL_RO);
>  
> -	/* flush the TLBs after updating live kernel mappings */
> -	flush_tlb_all();
> -
>  	debug_checkwx();
>  }
>  
> -- 
> 2.7.4
> 

* Re: [PATCH v2 3/5] arm64: alternatives: apply boot time fixups via the linear mapping
  2017-02-11 20:23   ` Ard Biesheuvel
@ 2017-02-14 15:56     ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-02-14 15:56 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott

On Sat, Feb 11, 2017 at 08:23:04PM +0000, Ard Biesheuvel wrote:
> One important rule of thumb when designing a secure software system is
> that memory should never be writable and executable at the same time.
> We mostly adhere to this rule in the kernel, except at boot time, when
> regions may be mapped RWX until after we are done applying alternatives
> or making other one-off changes.
> 
> For the alternative patching, we can improve the situation by applying
> the fixups via the linear mapping, which is never mapped with executable
> permissions. So map the linear alias of .text with RW- permissions
> initially, and remove the write permissions as soon as alternative
> patching has completed.
> 
> Reviewed-by: Laura Abbott <labbott@redhat.com>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/mmu.h    |  1 +
>  arch/arm64/kernel/alternative.c |  2 +-
>  arch/arm64/kernel/smp.c         |  1 +
>  arch/arm64/mm/mmu.c             | 22 +++++++++++++++-----
>  4 files changed, 20 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 47619411f0ff..5468c834b072 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -37,5 +37,6 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  			       unsigned long virt, phys_addr_t size,
>  			       pgprot_t prot, bool page_mappings_only);
>  extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
> +extern void mark_linear_text_alias_ro(void);
>  
>  #endif
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index 06d650f61da7..8cee29d9bc07 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -128,7 +128,7 @@ static void __apply_alternatives(void *alt_region)
>  
>  		for (i = 0; i < nr_inst; i++) {
>  			insn = get_alt_insn(alt, origptr + i, replptr + i);
> -			*(origptr + i) = cpu_to_le32(insn);
> +			((u32 *)lm_alias(origptr))[i] = cpu_to_le32(insn);
>  		}
>  
>  		flush_icache_range((uintptr_t)origptr,
> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
> index a8ec5da530af..d6307e311a10 100644
> --- a/arch/arm64/kernel/smp.c
> +++ b/arch/arm64/kernel/smp.c
> @@ -432,6 +432,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
>  	setup_cpu_features();
>  	hyp_mode_check();
>  	apply_alternatives_all();
> +	mark_linear_text_alias_ro();
>  }
>  
>  void __init smp_prepare_boot_cpu(void)
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9e0ec1a8cd3b..7ed981c7f4c0 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -398,16 +398,28 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
>  				     debug_pagealloc_enabled());
>  
>  	/*
> -	 * Map the linear alias of the [_text, __init_begin) interval as
> -	 * read-only/non-executable. This makes the contents of the
> -	 * region accessible to subsystems such as hibernate, but
> -	 * protects it from inadvertent modification or execution.
> +	 * Map the linear alias of the [_text, __init_begin) interval
> +	 * as non-executable now, and remove the write permission in
> +	 * mark_linear_text_alias_ro() below (which will be called after
> +	 * alternative patching has completed). This makes the contents
> +	 * of the region accessible to subsystems such as hibernate,
> +	 * but protects it from inadvertent modification or execution.
>  	 */
>  	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
> -			     kernel_end - kernel_start, PAGE_KERNEL_RO,
> +			     kernel_end - kernel_start, PAGE_KERNEL,
>  			     early_pgtable_alloc, debug_pagealloc_enabled());
>  }
>  
> +void __init mark_linear_text_alias_ro(void)
> +{
> +	/*
> +	 * Remove the write permissions from the linear alias of .text/.rodata
> +	 */
> +	create_mapping_late(__pa_symbol(_text), (unsigned long)lm_alias(_text),
> +			    (unsigned long)__init_begin - (unsigned long)_text,
> +			    PAGE_KERNEL_RO);
> +}
> +
>  static void __init map_mem(pgd_t *pgd)
>  {
>  	struct memblock_region *reg;
> -- 
> 2.7.4
> 

* Re: [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset
  2017-02-11 20:23   ` Ard Biesheuvel
@ 2017-02-14 15:57     ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-02-14 15:57 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott

On Sat, Feb 11, 2017 at 08:23:05PM +0000, Ard Biesheuvel wrote:
> Now that alternatives patching code no longer relies on the primary
> mapping of .text being writable, we can remove the code that removes
> the writable permissions post-init time, and map it read-only from
> the outset.
> 
> Reviewed-by: Laura Abbott <labbott@redhat.com>
> Reviewed-by: Kees Cook <keescook@chromium.org>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

This generally looks good.

One effect of this is that even with rodata=off, external debuggers
can't install SW breakpoints via the executable mapping.

We might want to allow that to be overridden. e.g. make rodata= an
early param, and switch the permissions based on that in map_kernel(),
e.g. have:

	pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX
					    : PAGE_KERNEL_EXEC;

... and use that for .text and .init.text by default.
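
For the rodata= parsing itself, a minimal (untested) sketch could look
like this, with the rodata_enabled flag assumed above defaulting to true:

	static bool rodata_enabled = true;

	static int __init parse_rodata(char *arg)
	{
		return strtobool(arg, &rodata_enabled);
	}
	early_param("rodata", parse_rodata);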

Thanks,
Mark.

> ---
>  arch/arm64/mm/mmu.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 7ed981c7f4c0..e97f1ce967ec 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -442,9 +442,6 @@ void mark_rodata_ro(void)
>  {
>  	unsigned long section_size;
>  
> -	section_size = (unsigned long)_etext - (unsigned long)_text;
> -	create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
> -			    section_size, PAGE_KERNEL_ROX);
>  	/*
>  	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
>  	 * to cover NOTES and EXCEPTION_TABLE.
> @@ -484,7 +481,7 @@ static void __init map_kernel(pgd_t *pgd)
>  {
>  	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
>  
> -	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
> +	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
>  	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
>  	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
>  			   &vmlinux_init);
> -- 
> 2.7.4
> 

* Re: [PATCH v2 5/5] arm64: mmu: apply strict permissions to .init.text and .init.data
  2017-02-11 20:23   ` Ard Biesheuvel
@ 2017-02-14 15:57     ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-02-14 15:57 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott

On Sat, Feb 11, 2017 at 08:23:06PM +0000, Ard Biesheuvel wrote:
> To avoid having mappings that are writable and executable at the same
> time, split the init region into a .init.text region that is mapped
> read-only, and a .init.data region that is mapped non-executable.
> 
> This is possible now that the alternative patching occurs via the linear
> mapping, and the linear alias of the init region is always mapped writable
> (but never executable).
> 
> Since the alternatives descriptions themselves are read-only data, move
> those into the .init.text region.
> 
> Reviewed-by: Laura Abbott <labbott@redhat.com>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

This generally looks good.

As with my comment on patch 4, we might want to allow .init.text to be
mapped writeable for the sake of external debuggers.

Thanks,
Mark.

> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index e97f1ce967ec..c53c43b4ed3f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -479,12 +479,16 @@ static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
>   */
>  static void __init map_kernel(pgd_t *pgd)
>  {
> -	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
> +	static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_inittext,
> +				vmlinux_initdata, vmlinux_data;
>  
>  	map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
> -	map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
> -	map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
> -			   &vmlinux_init);
> +	map_kernel_segment(pgd, __start_rodata, __inittext_begin, PAGE_KERNEL,
> +			   &vmlinux_rodata);
> +	map_kernel_segment(pgd, __inittext_begin, __inittext_end, PAGE_KERNEL_ROX,
> +			   &vmlinux_inittext);
> +	map_kernel_segment(pgd, __initdata_begin, __initdata_end, PAGE_KERNEL,
> +			   &vmlinux_initdata);
>  	map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
>  
>  	if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
> -- 
> 2.7.4
> 

* Re: [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset
  2017-02-14 15:57     ` Mark Rutland
@ 2017-02-14 16:15       ` Ard Biesheuvel
  0 siblings, 0 replies; 44+ messages in thread
From: Ard Biesheuvel @ 2017-02-14 16:15 UTC (permalink / raw)
  To: Mark Rutland
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott


> On 14 Feb 2017, at 15:57, Mark Rutland <mark.rutland@arm.com> wrote:
> 
>> On Sat, Feb 11, 2017 at 08:23:05PM +0000, Ard Biesheuvel wrote:
>> Now that alternatives patching code no longer relies on the primary
>> mapping of .text being writable, we can remove the code that removes
>> the writable permissions post-init time, and map it read-only from
>> the outset.
>> 
>> Reviewed-by: Laura Abbott <labbott@redhat.com>
>> Reviewed-by: Kees Cook <keescook@chromium.org>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> 
> This generally looks good.
> 
> One effect of this is that even with rodata=off, external debuggers
> can't install SW breakpoints via the executable mapping.
> 

Interesting. For the sake of my education, could you elaborate on how that works under the hood?

> We might want to allow that to be overridden. e.g. make rodata= an
> early param, and switch the permissions based on that in map_kernel(),
> e.g. have:
> 
>    pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX
>                        : PAGE_KERNEL_EXEC;
> 
> ... and use that for .text and .init.text by default.
> 
> 

Is there any way we could restrict this privilege to external debuggers?
Having trivial 'off' switches for security features makes me feel uneasy
(although this is orthogonal to this patch).
>> ---
>> arch/arm64/mm/mmu.c | 5 +----
>> 1 file changed, 1 insertion(+), 4 deletions(-)
>> 
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index 7ed981c7f4c0..e97f1ce967ec 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -442,9 +442,6 @@ void mark_rodata_ro(void)
>> {
>>    unsigned long section_size;
>> 
>> -    section_size = (unsigned long)_etext - (unsigned long)_text;
>> -    create_mapping_late(__pa_symbol(_text), (unsigned long)_text,
>> -                section_size, PAGE_KERNEL_ROX);
>>    /*
>>     * mark .rodata as read only. Use __init_begin rather than __end_rodata
>>     * to cover NOTES and EXCEPTION_TABLE.
>> @@ -484,7 +481,7 @@ static void __init map_kernel(pgd_t *pgd)
>> {
>>    static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
>> 
>> -    map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
>> +    map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_ROX, &vmlinux_text);
>>    map_kernel_segment(pgd, __start_rodata, __init_begin, PAGE_KERNEL, &vmlinux_rodata);
>>    map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
>>               &vmlinux_init);
>> -- 
>> 2.7.4
>> 

* Re: [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset
  2017-02-14 16:15       ` Ard Biesheuvel
@ 2017-02-14 17:40         ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-02-14 17:40 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott

On Tue, Feb 14, 2017 at 04:15:11PM +0000, Ard Biesheuvel wrote:
> 
> > On 14 Feb 2017, at 15:57, Mark Rutland <mark.rutland@arm.com> wrote:
> > 
> >> On Sat, Feb 11, 2017 at 08:23:05PM +0000, Ard Biesheuvel wrote:
> >> Now that alternatives patching code no longer relies on the primary
> >> mapping of .text being writable, we can remove the code that removes
> >> the writable permissions post-init time, and map it read-only from
> >> the outset.
> >> 
> >> Reviewed-by: Laura Abbott <labbott@redhat.com>
> >> Reviewed-by: Kees Cook <keescook@chromium.org>
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > 
> > This generally looks good.
> > 
> > One effect of this is that even with rodata=off, external debuggers
> > can't install SW breakpoints via the executable mapping.
> 
> Interesting. For the sake of my education, could you elaborate on how
> that works under the hood?

There are details in ARM DDI 0487A.k_iss10775, Chapter H1, "About
External Debug", page H1-4839 onwards. Otherwise, executive summary
below.

An external debugger can place a CPU into debug state. This is
orthogonal to execution state and exception level, which are unchanged.
While in this state, the CPU (only) executes instructions fed to it by
the debugger through a special register.

To install a SW breakpoint, the debugger makes the CPU enter debug
state, then issues regular stores, barriers, and cache maintenance.
These operate in the current execution state at the current EL, using
the current translation regime.

The external debugger can also trap exceptions (e.g. those caused by the
SW breakpoint). The CPU enters debug state when these are trapped.

> > We might want to allow that to be overridden. e.g. make rodata= an
> > early param, and switch the permissions based on that in map_kernel(),
> > e.g. have:
> > 
> >    pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX
> >                        : PAGE_KERNEL_EXEC;
> > 
> > ... and use that for .text and .init.text by default.
> > 
> > 
> 
> Is there any way we could restrict this privilege to external
> debuggers?

My understanding is that we cannot.

> Having trivial 'off' switches for security features makes me feel
> uneasy (although this is orthogonal to this patch)

From my PoV, external debuggers are the sole reason to allow rodata=off
for arm64, and we already allow rodata=off.

Thanks,
Mark.

* Re: [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset
  2017-02-14 17:40         ` Mark Rutland
@ 2017-02-14 17:49           ` Ard Biesheuvel
  0 siblings, 0 replies; 44+ messages in thread
From: Ard Biesheuvel @ 2017-02-14 17:49 UTC (permalink / raw)
  To: Mark Rutland
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott


> On 14 Feb 2017, at 17:40, Mark Rutland <mark.rutland@arm.com> wrote:
> 
>> On Tue, Feb 14, 2017 at 04:15:11PM +0000, Ard Biesheuvel wrote:
>> 
>>>> On 14 Feb 2017, at 15:57, Mark Rutland <mark.rutland@arm.com> wrote:
>>>> 
>>>> On Sat, Feb 11, 2017 at 08:23:05PM +0000, Ard Biesheuvel wrote:
>>>> Now that alternatives patching code no longer relies on the primary
>>>> mapping of .text being writable, we can remove the code that removes
>>>> the writable permissions post-init time, and map it read-only from
>>>> the outset.
>>>> 
>>>> Reviewed-by: Laura Abbott <labbott@redhat.com>
>>>> Reviewed-by: Kees Cook <keescook@chromium.org>
>>>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>>> 
>>> This generally looks good.
>>> 
>>> One effect of this is that even with rodata=off, external debuggers
>>> can't install SW breakpoints via the executable mapping.
>> 
>> Interesting. For the sake of my education, could you elaborate on how
>> that works under the hood?
> 
> There are details in ARM DDI 0487A.k_iss10775, Chapter H1, "About
> External Debug", page H1-4839 onwards. Otherwise, executive summary
> below.
> 
> An external debugger can place a CPU into debug state. This is
> orthogonal to execution state and exception level, which are unchanged.
> While in this state, the CPU (only) executes instructions fed to it by
> the debugger through a special register.
> 
> To install a SW breakpoint, the debugger makes the CPU enter debug
> state, then issues regular stores, barriers, and cache maintenance.
> These operate in the current execution state at the current EL, using
> the current translation regime.
> 
> The external debugger can also trap exceptions (e.g. those caused by the
> SW breakpoint). The CPU enters debug state when these are trapped.
> 

OK, thanks for the explanation.

>>> We might want to allow that to be overridden. e.g. make rodata= an
>>> early param, and switch the permissions based on that in map_kernel(),
>>> e.g. have:
>>> 
>>>   pgprot_t text_prot = rodata_enabled ? PAGE_KERNEL_ROX
>>>                       : PAGE_KERNEL_EXEC;
>>> 
>>> ... and use that for .text and .init.text by default.
>>> 
>>> 
>> 
>> Is there any way we could restrict this privilege to external
>> debuggers?
> 
> My understanding is that we cannot.
> 
>> Having trivial 'off' switches for security features makes me feel
>> uneasy (although this is orthogonal to this patch)
> 
> From my PoV, external debuggers are the sole reason to allow rodata=off
> for arm64, and we already allow rodata=off.
> 
> 

Indeed. If that is how it works currently, we shouldn't interfere with it.
If we ever get anywhere with the lockdown patches, we should blacklist this
parameter (or rather, not whitelist it, since blacklisting kernel params to
enforce security is infeasible IMO).

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset
  2017-02-14 17:49           ` Ard Biesheuvel
  (?)
@ 2017-02-14 17:54             ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-02-14 17:54 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: keescook, marc.zyngier, catalin.marinas, kernel-hardening,
	will.deacon, andre.przywara, nd, kvmarm, linux-arm-kernel,
	labbott

On Tue, Feb 14, 2017 at 05:49:19PM +0000, Ard Biesheuvel wrote:
> 
> > On 14 Feb 2017, at 17:40, Mark Rutland <mark.rutland@arm.com> wrote:
> > 
> >> On Tue, Feb 14, 2017 at 04:15:11PM +0000, Ard Biesheuvel wrote:

> >> Having trivial 'off' switches for security features makes me feel
> >> uneasy (although this is orthogonal to this patch)
> > 
> > From my PoV, external debuggers are the sole reason to allow rodata=off
> > for arm64, and we already allow rodata=off.
> > 
> > 
> 
> Indeed. If that is how it works currently, we shouldn't interfere with
> it. If we ever get anywhere with the lockdown patches, we should
> blacklist this parameter (or rather, not whitelist it, since
> blacklisting kernel params to enforce security is infeasible, imo)

Agreed on all counts!

Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2017-02-14 17:54 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-11 20:23 [PATCH v2 0/5] arm64: mmu: avoid writeable-executable mappings Ard Biesheuvel
2017-02-11 20:23 ` [PATCH v2 1/5] arm: kvm: move kvm_vgic_global_state out of .text section Ard Biesheuvel
2017-02-13 17:58   ` Mark Rutland
2017-02-11 20:23 ` [PATCH v2 2/5] arm64: mmu: move TLB maintenance from callers to create_mapping_late() Ard Biesheuvel
2017-02-14 15:54   ` Mark Rutland
2017-02-11 20:23 ` [PATCH v2 3/5] arm64: alternatives: apply boot time fixups via the linear mapping Ard Biesheuvel
2017-02-14 15:56   ` Mark Rutland
2017-02-11 20:23 ` [PATCH v2 4/5] arm64: mmu: map .text as read-only from the outset Ard Biesheuvel
2017-02-14 15:57   ` Mark Rutland
2017-02-14 16:15     ` Ard Biesheuvel
2017-02-14 17:40       ` Mark Rutland
2017-02-14 17:49         ` Ard Biesheuvel
2017-02-14 17:54           ` Mark Rutland
2017-02-11 20:23 ` [PATCH v2 5/5] arm64: mmu: apply strict permissions to .init.text and .init.data Ard Biesheuvel
2017-02-14 15:57   ` Mark Rutland

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.