* [PATCH 0/7] arm64: move literal data into .rodata section
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Prevent inadvertently creating speculative gadgets by moving literal data
into the .rodata section.
Patch #1 enables this for C code by reverting a change that disabled the
GCC feature implementing this. Note that this conflicts with the mitigation
of erratum #843419 for Cortex-A53.
Patches #2 - #7 update the crypto asm code to move S-boxes and round constant
tables (which may or may not be hiding 'interesting' opcodes) from .text
to .rodata.
Ard Biesheuvel (7):
arm64: kernel: avoid executable literal pools
arm64/crypto: aes-cipher: move S-box to .rodata section
arm64/crypto: aes-neon: move literal data to .rodata section
arm64/crypto: crc32: move literal data to .rodata section
arm64/crypto: crct10dif: move literal data to .rodata section
arm64/crypto: sha2-ce: move the round constant table to .rodata
section
arm64/crypto: sha1-ce: get rid of literal pool
arch/arm64/Makefile | 4 ++--
arch/arm64/crypto/aes-cipher-core.S | 19 ++++++++++---------
arch/arm64/crypto/aes-neon.S | 8 ++++----
arch/arm64/crypto/crc32-ce-core.S | 7 ++++---
arch/arm64/crypto/crct10dif-ce-core.S | 17 +++++++++--------
arch/arm64/crypto/sha1-ce-core.S | 20 +++++++++-----------
arch/arm64/crypto/sha2-ce-core.S | 4 +++-
7 files changed, 41 insertions(+), 38 deletions(-)
--
2.11.0
* [PATCH 1/7] arm64: kernel: avoid executable literal pools
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Recent versions of GCC will emit literals into a separate .rodata section
rather than interspersed with the instruction stream. We disabled this
in commit 67dfa1751ce71 ("arm64: errata: Add -mpc-relative-literal-loads
to build flags"), because it uses adrp/add pairs to reference these
literals even when building with -mcmodel=large, which breaks module
loading when we have the mitigation for Cortex-A53 erratum #843419
enabled.
However, due to the recent discoveries regarding speculative execution,
we should avoid putting data into executable sections, to prevent
creating speculative gadgets inadvertently.
So set -mpc-relative-literal-loads only for modules, and only if the
A53 erratum is enabled.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b481b4a7c011..bd7cb205e28a 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -26,7 +26,8 @@ ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
ifeq ($(call ld-option, --fix-cortex-a53-843419),)
$(warning ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum)
else
-LDFLAGS_vmlinux += --fix-cortex-a53-843419
+LDFLAGS_vmlinux += --fix-cortex-a53-843419
+KBUILD_CFLAGS_MODULE += $(call cc-option, -mpc-relative-literal-loads)
endif
endif
@@ -51,7 +52,6 @@ endif
KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
-KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads)
KBUILD_AFLAGS += $(lseinstr) $(brokengasinst)
KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
--
2.11.0
* [PATCH 2/7] arm64/crypto: aes-cipher: move S-box to .rodata section
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Move the AES inverse S-box to the .rodata section where it is safe from
abuse by speculation.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/crypto/aes-cipher-core.S | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/crypto/aes-cipher-core.S b/arch/arm64/crypto/aes-cipher-core.S
index 6d2445d603cc..3a44eada2347 100644
--- a/arch/arm64/crypto/aes-cipher-core.S
+++ b/arch/arm64/crypto/aes-cipher-core.S
@@ -125,6 +125,16 @@ CPU_BE( rev w7, w7 )
ret
.endm
+ENTRY(__aes_arm64_encrypt)
+ do_crypt fround, crypto_ft_tab, crypto_ft_tab + 1, 2
+ENDPROC(__aes_arm64_encrypt)
+
+ .align 5
+ENTRY(__aes_arm64_decrypt)
+ do_crypt iround, crypto_it_tab, __aes_arm64_inverse_sbox, 0
+ENDPROC(__aes_arm64_decrypt)
+
+ .section ".rodata", "a"
.align L1_CACHE_SHIFT
.type __aes_arm64_inverse_sbox, %object
__aes_arm64_inverse_sbox:
@@ -161,12 +171,3 @@ __aes_arm64_inverse_sbox:
.byte 0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26
.byte 0xe1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0c, 0x7d
.size __aes_arm64_inverse_sbox, . - __aes_arm64_inverse_sbox
-
-ENTRY(__aes_arm64_encrypt)
- do_crypt fround, crypto_ft_tab, crypto_ft_tab + 1, 2
-ENDPROC(__aes_arm64_encrypt)
-
- .align 5
-ENTRY(__aes_arm64_decrypt)
- do_crypt iround, crypto_it_tab, __aes_arm64_inverse_sbox, 0
-ENDPROC(__aes_arm64_decrypt)
--
2.11.0
* [PATCH 3/7] arm64/crypto: aes-neon: move literal data to .rodata section
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: mark.rutland, steve.capper, herbert, Ard Biesheuvel,
marc.zyngier, catalin.marinas, will.deacon, dann.frazier
Move the S-boxes and some other literals to the .rodata section where
they are safe from being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/crypto/aes-neon.S | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/crypto/aes-neon.S b/arch/arm64/crypto/aes-neon.S
index f1e3aa2732f9..1c7b45b7268e 100644
--- a/arch/arm64/crypto/aes-neon.S
+++ b/arch/arm64/crypto/aes-neon.S
@@ -32,10 +32,10 @@
/* preload the entire Sbox */
.macro prepare, sbox, shiftrows, temp
- adr \temp, \sbox
movi v12.16b, #0x1b
- ldr q13, \shiftrows
- ldr q14, .Lror32by8
+ ldr_l q13, \shiftrows, \temp
+ ldr_l q14, .Lror32by8, \temp
+ adr_l \temp, \sbox
ld1 {v16.16b-v19.16b}, [\temp], #64
ld1 {v20.16b-v23.16b}, [\temp], #64
ld1 {v24.16b-v27.16b}, [\temp], #64
@@ -272,7 +272,7 @@
#include "aes-modes.S"
- .text
+ .section ".rodata", "a"
.align 6
.LForward_Sbox:
.byte 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5
--
2.11.0
* [PATCH 4/7] arm64/crypto: crc32: move literal data to .rodata section
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Move CRC32 literal data to the .rodata section where it is safe from
being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/crypto/crc32-ce-core.S | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/crypto/crc32-ce-core.S b/arch/arm64/crypto/crc32-ce-core.S
index 18f5a8442276..16ed3c7ebd37 100644
--- a/arch/arm64/crypto/crc32-ce-core.S
+++ b/arch/arm64/crypto/crc32-ce-core.S
@@ -50,7 +50,7 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+ .section ".rodata", "a"
.align 6
.cpu generic+crypto+crc
@@ -115,12 +115,13 @@
* uint crc32_pmull_le(unsigned char const *buffer,
* size_t len, uint crc32)
*/
+ .text
ENTRY(crc32_pmull_le)
- adr x3, .Lcrc32_constants
+ adr_l x3, .Lcrc32_constants
b 0f
ENTRY(crc32c_pmull_le)
- adr x3, .Lcrc32c_constants
+ adr_l x3, .Lcrc32c_constants
0: bic LEN, LEN, #15
ld1 {v1.16b-v4.16b}, [BUF], #0x40
--
2.11.0
* [PATCH 5/7] arm64/crypto: crct10dif: move literal data to .rodata section
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Move the CRC-T10DIF literal data to the .rodata section where it is
safe from being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/crypto/crct10dif-ce-core.S | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/crypto/crct10dif-ce-core.S b/arch/arm64/crypto/crct10dif-ce-core.S
index d5b5a8c038c8..f179c01bd55c 100644
--- a/arch/arm64/crypto/crct10dif-ce-core.S
+++ b/arch/arm64/crypto/crct10dif-ce-core.S
@@ -128,7 +128,7 @@ CPU_LE( ext v7.16b, v7.16b, v7.16b, #8 )
// XOR the initial_crc value
eor v0.16b, v0.16b, v10.16b
- ldr q10, rk3 // xmm10 has rk3 and rk4
+ ldr_l q10, rk3, x8 // xmm10 has rk3 and rk4
// type of pmull instruction
// will determine which constant to use
@@ -184,13 +184,13 @@ CPU_LE( ext v12.16b, v12.16b, v12.16b, #8 )
// fold the 8 vector registers to 1 vector register with different
// constants
- ldr q10, rk9
+ ldr_l q10, rk9, x8
.macro fold16, reg, rk
pmull v8.1q, \reg\().1d, v10.1d
pmull2 \reg\().1q, \reg\().2d, v10.2d
.ifnb \rk
- ldr q10, \rk
+ ldr_l q10, \rk, x8
.endif
eor v7.16b, v7.16b, v8.16b
eor v7.16b, v7.16b, \reg\().16b
@@ -251,7 +251,7 @@ CPU_LE( ext v1.16b, v1.16b, v1.16b, #8 )
// get rid of the extra data that was loaded before
// load the shift constant
- adr x4, tbl_shf_table + 16
+ adr_l x4, tbl_shf_table + 16
sub x4, x4, arg3
ld1 {v0.16b}, [x4]
@@ -275,7 +275,7 @@ CPU_LE( ext v1.16b, v1.16b, v1.16b, #8 )
_128_done:
// compute crc of a 128-bit value
- ldr q10, rk5 // rk5 and rk6 in xmm10
+ ldr_l q10, rk5, x8 // rk5 and rk6 in xmm10
// 64b fold
ext v0.16b, vzr.16b, v7.16b, #8
@@ -291,7 +291,7 @@ _128_done:
// barrett reduction
_barrett:
- ldr q10, rk7
+ ldr_l q10, rk7, x8
mov v0.d[0], v7.d[1]
pmull v0.1q, v0.1d, v10.1d
@@ -321,7 +321,7 @@ CPU_LE( ext v7.16b, v7.16b, v7.16b, #8 )
b.eq _128_done // exactly 16 left
b.lt _less_than_16_left
- ldr q10, rk1 // rk1 and rk2 in xmm10
+ ldr_l q10, rk1, x8 // rk1 and rk2 in xmm10
// update the counter. subtract 32 instead of 16 to save one
// instruction from the loop
@@ -333,7 +333,7 @@ CPU_LE( ext v7.16b, v7.16b, v7.16b, #8 )
_less_than_16_left:
// shl r9, 4
- adr x0, tbl_shf_table + 16
+ adr_l x0, tbl_shf_table + 16
sub x0, x0, arg3
ld1 {v0.16b}, [x0]
movi v9.16b, #0x80
@@ -345,6 +345,7 @@ ENDPROC(crc_t10dif_pmull)
// precomputed constants
// these constants are precomputed from the poly:
// 0x8bb70000 (0x8bb7 scaled to 32 bits)
+ .section ".rodata", "a"
.align 4
// Q = 0x18BB70000
// rk1 = 2^(32*3) mod Q << 32
--
2.11.0
* [PATCH 6/7] arm64/crypto: sha2-ce: move the round constant table to .rodata section
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Move the SHA2 round constant table to the .rodata section where it is
safe from being exploited by speculative execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/crypto/sha2-ce-core.S | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index 679c6c002f4f..4c3c89b812ce 100644
--- a/arch/arm64/crypto/sha2-ce-core.S
+++ b/arch/arm64/crypto/sha2-ce-core.S
@@ -53,6 +53,7 @@
/*
* The SHA-256 round constants
*/
+ .section ".rodata", "a"
.align 4
.Lsha2_rcon:
.word 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5
@@ -76,9 +77,10 @@
* void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
* int blocks)
*/
+ .text
ENTRY(sha2_ce_transform)
/* load round constants */
- adr x8, .Lsha2_rcon
+ adr_l x8, .Lsha2_rcon
ld1 { v0.4s- v3.4s}, [x8], #64
ld1 { v4.4s- v7.4s}, [x8], #64
ld1 { v8.4s-v11.4s}, [x8], #64
--
2.11.0
* [PATCH 7/7] arm64/crypto: sha1-ce: get rid of literal pool
From: Ard Biesheuvel @ 2018-01-10 12:11 UTC (permalink / raw)
To: linux-arm-kernel, linux-crypto
Cc: herbert, will.deacon, catalin.marinas, marc.zyngier,
mark.rutland, dann.frazier, steve.capper, Ard Biesheuvel
Load the four SHA-1 round constants using immediates rather than literal
pool entries, to avoid having executable data that may be exploitable
under speculation attacks.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
arch/arm64/crypto/sha1-ce-core.S | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/crypto/sha1-ce-core.S b/arch/arm64/crypto/sha1-ce-core.S
index 8550408735a0..46049850727d 100644
--- a/arch/arm64/crypto/sha1-ce-core.S
+++ b/arch/arm64/crypto/sha1-ce-core.S
@@ -58,12 +58,11 @@
sha1su1 v\s0\().4s, v\s3\().4s
.endm
- /*
- * The SHA1 round constants
- */
- .align 4
-.Lsha1_rcon:
- .word 0x5a827999, 0x6ed9eba1, 0x8f1bbcdc, 0xca62c1d6
+ .macro loadrc, k, val, tmp
+ movz \tmp, :abs_g0_nc:\val
+ movk \tmp, :abs_g1:\val
+ dup \k, \tmp
+ .endm
/*
* void sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
@@ -71,11 +70,10 @@
*/
ENTRY(sha1_ce_transform)
/* load round constants */
- adr x6, .Lsha1_rcon
- ld1r {k0.4s}, [x6], #4
- ld1r {k1.4s}, [x6], #4
- ld1r {k2.4s}, [x6], #4
- ld1r {k3.4s}, [x6]
+ loadrc k0.4s, 0x5a827999, w6
+ loadrc k1.4s, 0x6ed9eba1, w6
+ loadrc k2.4s, 0x8f1bbcdc, w6
+ loadrc k3.4s, 0xca62c1d6, w6
/* load state */
ld1 {dgav.4s}, [x0]
--
2.11.0
* [PATCH 1/7] arm64: kernel: avoid executable literal pools
From: Ard Biesheuvel @ 2018-01-14 23:27 UTC (permalink / raw)
To: linux-arm-kernel
On 10 January 2018 at 12:11, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> Recent versions of GCC will emit literals into a separate .rodata section
> rather than interspersed with the instruction stream. We disabled this
> in commit 67dfa1751ce71 ("arm64: errata: Add -mpc-relative-literal-loads
> to build flags"), because it uses adrp/add pairs to reference these
> literals even when building with -mcmodel=large, which breaks module
> loading when we have the mitigation for Cortex-A53 erratum #843419
> enabled.
>
> However, due to the recent discoveries regarding speculative execution,
> we should avoid putting data into executable sections, to prevent
> creating speculative gadgets inadvertently.
>
> So set -mpc-relative-literal-loads only for modules, and only if the
> A53 erratum is enabled.
>
This appears not to help: even with the command line option removed,
the literals are still emitted into the .text section, even though the
references are emitted using adrp/ldr pairs. AFAICT, the reason for
this feature was very large functions (>1 MB), even though I am pretty
sure I discussed the ROP gadget use case with Ramana at some point.
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> arch/arm64/Makefile | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index b481b4a7c011..bd7cb205e28a 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -26,7 +26,8 @@ ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
> ifeq ($(call ld-option, --fix-cortex-a53-843419),)
> $(warning ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum)
> else
> -LDFLAGS_vmlinux += --fix-cortex-a53-843419
> +LDFLAGS_vmlinux += --fix-cortex-a53-843419
> +KBUILD_CFLAGS_MODULE += $(call cc-option, -mpc-relative-literal-loads)
> endif
> endif
>
> @@ -51,7 +52,6 @@ endif
>
> KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst)
> KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
> -KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads)
> KBUILD_AFLAGS += $(lseinstr) $(brokengasinst)
>
> KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
> --
> 2.11.0
>
* [PATCH 1/7] arm64: kernel: avoid executable literal pools
From: Ard Biesheuvel @ 2018-01-14 23:29 UTC (permalink / raw)
To: linux-arm-kernel
On 14 January 2018 at 23:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 10 January 2018 at 12:11, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> Recent versions of GCC will emit literals into a separate .rodata section
>> rather than interspersed with the instruction stream. We disabled this
>> in commit 67dfa1751ce71 ("arm64: errata: Add -mpc-relative-literal-loads
>> to build flags"), because it uses adrp/add pairs to reference these
>> literals even when building with -mcmodel=large, which breaks module
>> loading when we have the mitigation for Cortex-A53 erratum #843419
>> enabled.
>>
>> However, due to the recent discoveries regarding speculative execution,
>> we should avoid putting data into executable sections, to prevent
>> creating speculative gadgets inadvertently.
>>
>> So set -mpc-relative-literal-loads only for modules, and only if the
>> A53 erratum is enabled.
>>
>
> This appears not to help: even with the command-line option removed,
> the literals are still emitted into the .text section, although the
> references are emitted using adrp/ldr pairs. AFAICT, the motivation
> for this feature was very large functions (>1 MB), although I am
> pretty sure I discussed the ROP gadget use case with Ramana at some
> point.
>
Ehm, apologies: right reply, but to the wrong patch.
*This* patch is pointless because vmlinux is built using the small
code model, so whether we apply the GCC option to everything or to
modules only makes no difference.
In summary, it seems the only way to get rid of literals in the .text
section is to drop the large code model entirely.
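[As a hedged illustration of the problem being discussed (this is not
code from the series; the function names, labels, and constant value
are made up), compare a constant emitted into .text with the same
constant moved to .rodata and referenced via an adrp/add pair:]

```
	.text
func_before:
	adr	x9, 0f			// PC-relative, short range: only
	ldr	w10, [x9]		// works if the data sits in .text
	ret
0:	.word	0x1b000000		// data in an executable section;
					// its bytes may decode as an
					// instruction, i.e. a potential
					// speculation gadget

	.section ".rodata", "a"
	.align	2
.Lconst:
	.word	0x1b000000		// same data, no longer executable

	.text
func_after:
	adrp	x9, .Lconst		// page address of the literal ...
	add	x9, x9, :lo12:.Lconst	// ... plus its low 12 bits
	ldr	w10, [x9]
	ret
```

[The adrp/add sequence reaches any section within +/- 4 GB, so the data
no longer has to live next to the code; the arm64 kernel's adr_l macro
(asm/assembler.h) expands to an equivalent pair.]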
>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>> arch/arm64/Makefile | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
>> index b481b4a7c011..bd7cb205e28a 100644
>> --- a/arch/arm64/Makefile
>> +++ b/arch/arm64/Makefile
>> @@ -26,7 +26,8 @@ ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
>> ifeq ($(call ld-option, --fix-cortex-a53-843419),)
>> $(warning ld does not support --fix-cortex-a53-843419; kernel may be susceptible to erratum)
>> else
>> -LDFLAGS_vmlinux += --fix-cortex-a53-843419
>> +LDFLAGS_vmlinux += --fix-cortex-a53-843419
>> +KBUILD_CFLAGS_MODULE += $(call cc-option, -mpc-relative-literal-loads)
>> endif
>> endif
>>
>> @@ -51,7 +52,6 @@ endif
>>
>> KBUILD_CFLAGS += -mgeneral-regs-only $(lseinstr) $(brokengasinst)
>> KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
>> -KBUILD_CFLAGS += $(call cc-option, -mpc-relative-literal-loads)
>> KBUILD_AFLAGS += $(lseinstr) $(brokengasinst)
>>
>> KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
>> --
>> 2.11.0
>>
* Re: [PATCH 0/7] arm64: move literal data into .rodata section
2018-01-10 12:11 ` Ard Biesheuvel
@ 2018-01-18 11:41 ` Herbert Xu
-1 siblings, 0 replies; 24+ messages in thread
From: Herbert Xu @ 2018-01-18 11:41 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-arm-kernel, linux-crypto, will.deacon, catalin.marinas,
marc.zyngier, mark.rutland, dann.frazier, steve.capper
On Wed, Jan 10, 2018 at 12:11:35PM +0000, Ard Biesheuvel wrote:
> Prevent inadvertently creating speculative gadgets by moving literal data
> into the .rodata section.
>
> Patch #1 enables this for C code, by reverting a change that disables the
> GCC feature implementing this. Note that this conflicts with the mitigation
> of erratum #843419 for Cortex-A53.
Ard, which tree is this supposed to go through?
Thanks,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH 0/7] arm64: move literal data into .rodata section
2018-01-18 11:41 ` Herbert Xu
@ 2018-01-18 11:46 ` Ard Biesheuvel
-1 siblings, 0 replies; 24+ messages in thread
From: Ard Biesheuvel @ 2018-01-18 11:46 UTC (permalink / raw)
To: Herbert Xu
Cc: linux-arm-kernel,
open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Will Deacon,
Catalin Marinas, Marc Zyngier, Mark Rutland, Dann Frazier,
Steve Capper
On 18 January 2018 at 11:41, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> On Wed, Jan 10, 2018 at 12:11:35PM +0000, Ard Biesheuvel wrote:
>> Prevent inadvertently creating speculative gadgets by moving literal data
>> into the .rodata section.
>>
>> Patch #1 enables this for C code, by reverting a change that disables the
>> GCC feature implementing this. Note that this conflicts with the mitigation
>> of erratum #843419 for Cortex-A53.
>
> Ard, which tree is this supposed to go through?
>
Hi Herbert,
I am going to drop that first patch, the remaining 6 patches can go
through the crypto tree as they are independent.
Thanks,
Ard.
* Re: [PATCH 0/7] arm64: move literal data into .rodata section
2018-01-18 11:46 ` Ard Biesheuvel
@ 2018-01-18 12:02 ` Herbert Xu
-1 siblings, 0 replies; 24+ messages in thread
From: Herbert Xu @ 2018-01-18 12:02 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-arm-kernel,
open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Will Deacon,
Catalin Marinas, Marc Zyngier, Mark Rutland, Dann Frazier,
Steve Capper
On Thu, Jan 18, 2018 at 11:46:07AM +0000, Ard Biesheuvel wrote:
> On 18 January 2018 at 11:41, Herbert Xu <herbert@gondor.apana.org.au> wrote:
> > On Wed, Jan 10, 2018 at 12:11:35PM +0000, Ard Biesheuvel wrote:
> >> Prevent inadvertently creating speculative gadgets by moving literal data
> >> into the .rodata section.
> >>
> >> Patch #1 enables this for C code, by reverting a change that disables the
> >> GCC feature implementing this. Note that this conflicts with the mitigation
> >> of erratum #843419 for Cortex-A53.
> >
> > Ard, which tree is this supposed to go through?
> >
>
> Hi Herbert,
>
> I am going to drop that first patch, the remaining 6 patches can go
> through the crypto tree as they are independent.
Patches 2-7 applied. Thanks.
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
end of thread, other threads:[~2018-01-18 12:03 UTC | newest]
Thread overview: 24+ messages
-- links below jump to the message on this page --
2018-01-10 12:11 [PATCH 0/7] arm64: move literal data into .rodata section Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 1/7] arm64: kernel: avoid executable literal pools Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-14 23:27 ` Ard Biesheuvel
2018-01-14 23:29 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 2/7] arm64/crypto: aes-cipher: move S-box to .rodata section Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 3/7] arm64/crypto: aes-neon: move literal data " Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 4/7] arm64/crypto: crc32: " Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 5/7] arm64/crypto: crct10dif: " Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 6/7] arm64/crypto: sha2-ce: move the round constant table " Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-10 12:11 ` [PATCH 7/7] arm64/crypto: sha1-ce: get rid of literal pool Ard Biesheuvel
2018-01-10 12:11 ` Ard Biesheuvel
2018-01-18 11:41 ` [PATCH 0/7] arm64: move literal data into .rodata section Herbert Xu
2018-01-18 11:41 ` Herbert Xu
2018-01-18 11:46 ` Ard Biesheuvel
2018-01-18 11:46 ` Ard Biesheuvel
2018-01-18 12:02 ` Herbert Xu
2018-01-18 12:02 ` Herbert Xu