* [PATCH 0/3] crypto: use unaligned accessors in aligned fast paths @ 2018-10-08 21:15 ` Ard Biesheuvel 0 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-08 21:15 UTC (permalink / raw) To: linux-crypto Cc: jason, herbert, arnd, Ard Biesheuvel, ebiggers, linux-arm-kernel CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS behaves a bit counterintuitively on ARM: we set it for architecture revisions v6 and up, which support any alignment for load/store instructions that operate on bytes, half words or words. However, load/store double word and load/store multiple instructions still require 32-bit alignment, and using them on unaligned quantities results in costly alignment traps that have to be handled by the kernel's fixup code. Fortunately, the unaligned accessors do the right thing here: on architectures that really tolerate any misalignment, they simply resolve to the aligned accessors, while on ARMv6+ (which uses the packed struct wrappers for unaligned accesses), they result in load/store sequences that avoid the instructions that require 32-bit alignment. Since there is not really a downside to using the unaligned accessors on aligned paths for architectures other than ARM that define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, let's switch to them in a couple of places in the crypto code. Note that all patches target code that has been observed to compile into ldm or ldrd instructions when building ARM's multi_v7_defconfig. Ard Biesheuvel (3): crypto: memneq - use unaligned accessors for aligned fast path crypto: crypto_xor - use unaligned accessors for aligned fast path crypto: siphash - drop _aligned variants crypto/algapi.c | 7 +- crypto/memneq.c | 24 +++-- include/crypto/algapi.h | 11 +- include/linux/siphash.h | 106 +++++++++----------- lib/siphash.c | 103 ++----------------- 5 files changed, 83 insertions(+), 168 deletions(-) -- 2.11.0 ^ permalink raw reply [flat|nested] 23+ messages in thread
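As background for the series, the following is a rough sketch of the packed-struct idiom that the ARMv6+ flavour of get_unaligned() is built on (the real helpers live under include/linux/unaligned/ in the kernel tree; the function name below and the exact shape of the code are illustrative only, not a copy of the kernel's implementation):

#include <linux/types.h>

struct __una_u32 { u32 x; } __attribute__((packed));

static inline u32 load_unaligned_u32(const void *p)
{
	const struct __una_u32 *ptr = p;

	/*
	 * Because the struct is packed, the compiler may not assume
	 * 32-bit alignment, so on ARMv6+ it emits ordinary ldr/ldrb
	 * loads and never ldrd/ldm, which would trap when misaligned.
	 * On pre-v6 ARM the same wrapper expands to byte loads plus
	 * shifts/orrs instead.
	 */
	return ptr->x;
}

This is why using the unaligned accessors on an already-aligned fast path costs nothing on architectures that genuinely tolerate misalignment, while steering ARMv6+ away from the trapping instructions.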
* [PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path 2018-10-08 21:15 ` Ard Biesheuvel @ 2018-10-08 21:15 ` Ard Biesheuvel -1 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-08 21:15 UTC (permalink / raw) To: linux-crypto Cc: jason, herbert, arnd, Ard Biesheuvel, ebiggers, linux-arm-kernel On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory addresses that are 32-bit aligned, and so we have to use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we may end up with a severe performance hit due to alignment traps that require fixups by the kernel. Fortunately, the get_unaligned() accessors do the right thing: when building for ARMv6 or later, the compiler will emit unaligned accesses using the ordinary load/store instructions (but avoid the ones that require 32-bit alignment). When building for older ARM, those accessors will emit the appropriate sequence of ldrb/mov/orr instructions. And on architectures that can truly tolerate any kind of misalignment, the get_unaligned() accessors resolve to the leXX_to_cpup accessors that operate on aligned addresses. So switch to the unaligned accessors for the aligned fast path. This will create the exact same code on architectures that can really tolerate any kind of misalignment, and generate code for ARMv6+ that avoids load/store instructions that trigger alignment faults. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> --- crypto/memneq.c | 24 ++++++++++++++------ 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/crypto/memneq.c b/crypto/memneq.c index afed1bd16aee..0f46a6150f22 100644 --- a/crypto/memneq.c +++ b/crypto/memneq.c @@ -60,6 +60,7 @@ */ #include <crypto/algapi.h> +#include <asm/unaligned.h> #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ @@ -71,7 +72,10 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size) #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) while (size >= sizeof(unsigned long)) { - neq |= *(unsigned long *)a ^ *(unsigned long *)b; + unsigned long const *p = a; + unsigned long const *q = b; + + neq |= get_unaligned(p) ^ get_unaligned(q); OPTIMIZER_HIDE_VAR(neq); a += sizeof(unsigned long); b += sizeof(unsigned long); @@ -95,18 +99,24 @@ static inline unsigned long __crypto_memneq_16(const void *a, const void *b) #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS if (sizeof(unsigned long) == 8) { - neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b); + unsigned long const *p = a; + unsigned long const *q = b; + + neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); - neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8); + neq |= get_unaligned(p) ^ get_unaligned(q); OPTIMIZER_HIDE_VAR(neq); } else if (sizeof(unsigned int) == 4) { - neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b); + unsigned int const *p = a; + unsigned int const *q = b; + + neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); - neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4); + neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); - neq |= *(unsigned int *)(a+8) ^ *(unsigned int *)(b+8); + neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); - neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12); + neq |= get_unaligned(p) ^ 
get_unaligned(q); OPTIMIZER_HIDE_VAR(neq); } else #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */ -- 2.11.0 ^ permalink raw reply related [flat|nested] 23+ messages in thread
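For context, crypto_memneq() (the caller-facing wrapper around __crypto_memneq(), declared in <crypto/algapi.h>) is used for constant-time comparison of secrets such as authentication tags. A minimal hypothetical caller is sketched below; the function name and error handling are made up for illustration and are not part of the patch:

#include <linux/errno.h>
#include <linux/types.h>
#include <crypto/algapi.h>

/* Hypothetical helper: constant-time tag check built on crypto_memneq(). */
static int check_auth_tag(const u8 *computed, const u8 *received,
			  unsigned int len)
{
	/*
	 * crypto_memneq() returns nonzero if the buffers differ and,
	 * unlike memcmp(), does not leak the position of the first
	 * mismatching byte through its timing.
	 */
	return crypto_memneq(computed, received, len) ? -EBADMSG : 0;
}

With this patch applied, the word-at-a-time fast path inside __crypto_memneq_generic() keeps working for such callers regardless of buffer alignment, without risking ldrd/ldm alignment traps on ARMv6+.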
* Re: [PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path 2018-10-08 21:15 ` Ard Biesheuvel @ 2018-10-09 3:34 ` Eric Biggers -1 siblings, 0 replies; 23+ messages in thread From: Eric Biggers @ 2018-10-09 3:34 UTC (permalink / raw) To: Ard Biesheuvel; +Cc: linux-arm-kernel, jason, linux-crypto, arnd, herbert Hi Ard, On Mon, Oct 08, 2018 at 11:15:52PM +0200, Ard Biesheuvel wrote: > On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > because the ordinary load/store instructions (ldr, ldrh, ldrb) can > tolerate any misalignment of the memory address. However, load/store > double and load/store multiple instructions (ldrd, ldm) may still only > be used on memory addresses that are 32-bit aligned, and so we have to > use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we > may end up with a severe performance hit due to alignment traps that > require fixups by the kernel. > > Fortunately, the get_unaligned() accessors do the right thing: when > building for ARMv6 or later, the compiler will emit unaligned accesses > using the ordinary load/store instructions (but avoid the ones that > require 32-bit alignment). When building for older ARM, those accessors > will emit the appropriate sequence of ldrb/mov/orr instructions. And on > architectures that can truly tolerate any kind of misalignment, the > get_unaligned() accessors resolve to the leXX_to_cpup accessors that > operate on aligned addresses. > > So switch to the unaligned accessors for the aligned fast path. This > will create the exact same code on architectures that can really > tolerate any kind of misalignment, and generate code for ARMv6+ that > avoids load/store instructions that trigger alignment faults. > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> > --- > crypto/memneq.c | 24 ++++++++++++++------ > 1 file changed, 17 insertions(+), 7 deletions(-) > > diff --git a/crypto/memneq.c b/crypto/memneq.c > index afed1bd16aee..0f46a6150f22 100644 > --- a/crypto/memneq.c > +++ b/crypto/memneq.c > @@ -60,6 +60,7 @@ > */ > > #include <crypto/algapi.h> > +#include <asm/unaligned.h> > > #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ > > @@ -71,7 +72,10 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size) > > #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) > while (size >= sizeof(unsigned long)) { > - neq |= *(unsigned long *)a ^ *(unsigned long *)b; > + unsigned long const *p = a; > + unsigned long const *q = b; > + > + neq |= get_unaligned(p) ^ get_unaligned(q); > OPTIMIZER_HIDE_VAR(neq); > a += sizeof(unsigned long); > b += sizeof(unsigned long); > @@ -95,18 +99,24 @@ static inline unsigned long __crypto_memneq_16(const void *a, const void *b) > > #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > if (sizeof(unsigned long) == 8) { > - neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b); > + unsigned long const *p = a; > + unsigned long const *q = b; > + > + neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > - neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8); > + neq |= get_unaligned(p) ^ get_unaligned(q); > OPTIMIZER_HIDE_VAR(neq); > } else if (sizeof(unsigned int) == 4) { > - neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b); > + unsigned int const *p = a; > + unsigned int const *q = b; > + > + neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > - neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4); > + neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > - neq |= *(unsigned 
int *)(a+8) ^ *(unsigned int *)(b+8); > + neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > - neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12); > + neq |= get_unaligned(p) ^ get_unaligned(q); > OPTIMIZER_HIDE_VAR(neq); > } else > #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */ This looks good, but maybe now we should get rid of the !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS path too? At least for the 16-byte case: static inline unsigned long __crypto_memneq_16(const void *a, const void *b) { const unsigned long *p = a, *q = b; unsigned long neq = 0; BUILD_BUG_ON(sizeof(*p) != 4 && sizeof(*p) != 8); neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); if (sizeof(*p) == 4) { neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); neq |= get_unaligned(p++) ^ get_unaligned(q++); OPTIMIZER_HIDE_VAR(neq); } return neq; } ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path 2018-10-09 3:34 ` Eric Biggers @ 2018-10-09 6:01 ` Ard Biesheuvel -1 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-09 6:01 UTC (permalink / raw) To: Eric Biggers Cc: linux-arm-kernel, Jason A. Donenfeld, open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Arnd Bergmann, Herbert Xu On 9 October 2018 at 05:34, Eric Biggers <ebiggers@kernel.org> wrote: > Hi Ard, > > On Mon, Oct 08, 2018 at 11:15:52PM +0200, Ard Biesheuvel wrote: >> On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> because the ordinary load/store instructions (ldr, ldrh, ldrb) can >> tolerate any misalignment of the memory address. However, load/store >> double and load/store multiple instructions (ldrd, ldm) may still only >> be used on memory addresses that are 32-bit aligned, and so we have to >> use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we >> may end up with a severe performance hit due to alignment traps that >> require fixups by the kernel. >> >> Fortunately, the get_unaligned() accessors do the right thing: when >> building for ARMv6 or later, the compiler will emit unaligned accesses >> using the ordinary load/store instructions (but avoid the ones that >> require 32-bit alignment). When building for older ARM, those accessors >> will emit the appropriate sequence of ldrb/mov/orr instructions. And on >> architectures that can truly tolerate any kind of misalignment, the >> get_unaligned() accessors resolve to the leXX_to_cpup accessors that >> operate on aligned addresses. >> >> So switch to the unaligned accessors for the aligned fast path. This >> will create the exact same code on architectures that can really >> tolerate any kind of misalignment, and generate code for ARMv6+ that >> avoids load/store instructions that trigger alignment faults. 
>> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> >> --- >> crypto/memneq.c | 24 ++++++++++++++------ >> 1 file changed, 17 insertions(+), 7 deletions(-) >> >> diff --git a/crypto/memneq.c b/crypto/memneq.c >> index afed1bd16aee..0f46a6150f22 100644 >> --- a/crypto/memneq.c >> +++ b/crypto/memneq.c >> @@ -60,6 +60,7 @@ >> */ >> >> #include <crypto/algapi.h> >> +#include <asm/unaligned.h> >> >> #ifndef __HAVE_ARCH_CRYPTO_MEMNEQ >> >> @@ -71,7 +72,10 @@ __crypto_memneq_generic(const void *a, const void *b, size_t size) >> >> #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) >> while (size >= sizeof(unsigned long)) { >> - neq |= *(unsigned long *)a ^ *(unsigned long *)b; >> + unsigned long const *p = a; >> + unsigned long const *q = b; >> + >> + neq |= get_unaligned(p) ^ get_unaligned(q); >> OPTIMIZER_HIDE_VAR(neq); >> a += sizeof(unsigned long); >> b += sizeof(unsigned long); >> @@ -95,18 +99,24 @@ static inline unsigned long __crypto_memneq_16(const void *a, const void *b) >> >> #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> if (sizeof(unsigned long) == 8) { >> - neq |= *(unsigned long *)(a) ^ *(unsigned long *)(b); >> + unsigned long const *p = a; >> + unsigned long const *q = b; >> + >> + neq |= get_unaligned(p++) ^ get_unaligned(q++); >> OPTIMIZER_HIDE_VAR(neq); >> - neq |= *(unsigned long *)(a+8) ^ *(unsigned long *)(b+8); >> + neq |= get_unaligned(p) ^ get_unaligned(q); >> OPTIMIZER_HIDE_VAR(neq); >> } else if (sizeof(unsigned int) == 4) { >> - neq |= *(unsigned int *)(a) ^ *(unsigned int *)(b); >> + unsigned int const *p = a; >> + unsigned int const *q = b; >> + >> + neq |= get_unaligned(p++) ^ get_unaligned(q++); >> OPTIMIZER_HIDE_VAR(neq); >> - neq |= *(unsigned int *)(a+4) ^ *(unsigned int *)(b+4); >> + neq |= get_unaligned(p++) ^ get_unaligned(q++); >> OPTIMIZER_HIDE_VAR(neq); >> - neq |= *(unsigned int *)(a+8) ^ *(unsigned int *)(b+8); >> + neq |= get_unaligned(p++) ^ get_unaligned(q++); >> OPTIMIZER_HIDE_VAR(neq); >> - neq |= *(unsigned int *)(a+12) ^ *(unsigned int *)(b+12); >> + neq |= get_unaligned(p) ^ get_unaligned(q); >> OPTIMIZER_HIDE_VAR(neq); >> } else >> #endif /* CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS */ > > This looks good, but maybe now we should get rid of the > !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS path too? > At least for the 16-byte case: > > static inline unsigned long __crypto_memneq_16(const void *a, const void *b) > { > const unsigned long *p = a, *q = b; > unsigned long neq = 0; > > BUILD_BUG_ON(sizeof(*p) != 4 && sizeof(*p) != 8); > neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > if (sizeof(*p) == 4) { > neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > neq |= get_unaligned(p++) ^ get_unaligned(q++); > OPTIMIZER_HIDE_VAR(neq); > } > return neq; > } Yes that makes sense. ^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path 2018-10-08 21:15 ` Ard Biesheuvel @ 2018-10-08 21:15 ` Ard Biesheuvel -1 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-08 21:15 UTC (permalink / raw) To: linux-crypto Cc: jason, herbert, arnd, Ard Biesheuvel, ebiggers, linux-arm-kernel On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory addresses that are 32-bit aligned, and so we have to use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we may end up with a severe performance hit due to alignment traps that require fixups by the kernel. Fortunately, the get_unaligned() accessors do the right thing: when building for ARMv6 or later, the compiler will emit unaligned accesses using the ordinary load/store instructions (but avoid the ones that require 32-bit alignment). When building for older ARM, those accessors will emit the appropriate sequence of ldrb/mov/orr instructions. And on architectures that can truly tolerate any kind of misalignment, the get_unaligned() accessors resolve to the leXX_to_cpup accessors that operate on aligned addresses. So switch to the unaligned accessors for the aligned fast path. This will create the exact same code on architectures that can really tolerate any kind of misalignment, and generate code for ARMv6+ that avoids load/store instructions that trigger alignment faults. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> --- crypto/algapi.c | 7 +++---- include/crypto/algapi.h | 11 +++++++++-- 2 files changed, 12 insertions(+), 6 deletions(-) diff --git a/crypto/algapi.c b/crypto/algapi.c index 2545c5f89c4c..52ce3c5a0499 100644 --- a/crypto/algapi.c +++ b/crypto/algapi.c @@ -988,11 +988,10 @@ void crypto_inc(u8 *a, unsigned int size) __be32 *b = (__be32 *)(a + size); u32 c; - if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || - IS_ALIGNED((unsigned long)b, __alignof__(*b))) + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) for (; size >= 4; size -= 4) { - c = be32_to_cpu(*--b) + 1; - *b = cpu_to_be32(c); + c = get_unaligned_be32(--b) + 1; + put_unaligned_be32(c, b); if (likely(c)) return; } diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h index 4a5ad10e75f0..86267c232f34 100644 --- a/include/crypto/algapi.h +++ b/include/crypto/algapi.h @@ -17,6 +17,8 @@ #include <linux/kernel.h> #include <linux/skbuff.h> +#include <asm/unaligned.h> + /* * Maximum values for blocksize and alignmask, used to allocate * static buffers that are big enough for any combination of @@ -212,7 +214,9 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size) unsigned long *s = (unsigned long *)src; while (size > 0) { - *d++ ^= *s++; + put_unaligned(get_unaligned(d) ^ get_unaligned(s), d); + d++; + s++; size -= sizeof(unsigned long); } } else { @@ -231,7 +235,10 @@ static inline void crypto_xor_cpy(u8 *dst, const u8 *src1, const u8 *src2, unsigned long *s2 = (unsigned long *)src2; while (size > 0) { - *d++ = *s1++ ^ *s2++; + put_unaligned(get_unaligned(s1) ^ get_unaligned(s2), d); + d++; + s1++; + s2++; size -= sizeof(unsigned long); } } else { -- 2.11.0 ^ permalink raw reply related [flat|nested] 23+ messages in thread
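For reference, crypto_inc() and the crypto_xor() helpers touched by this patch are typically used by block cipher mode implementations to combine data with an IV or keystream block and to bump a big-endian counter. A hypothetical caller (illustrative only, not taken from any real mode implementation) might look like:

#include <linux/types.h>
#include <crypto/algapi.h>

/* Hypothetical example: XOR one 16-byte block and advance the counter. */
static void xor_block_and_bump_ctr(u8 *dst, const u8 *src, u8 ctr[16])
{
	crypto_xor_cpy(dst, src, ctr, 16);	/* dst = src ^ ctr */
	crypto_inc(ctr, 16);			/* big-endian increment */
}

With the patch applied, none of dst, src or ctr needs to be long-aligned on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS architectures for the fast path to be taken safely.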
* Re: [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path 2018-10-08 21:15 ` Ard Biesheuvel @ 2018-10-09 3:47 ` Eric Biggers -1 siblings, 0 replies; 23+ messages in thread From: Eric Biggers @ 2018-10-09 3:47 UTC (permalink / raw) To: Ard Biesheuvel; +Cc: linux-arm-kernel, jason, linux-crypto, arnd, herbert Hi Ard, On Mon, Oct 08, 2018 at 11:15:53PM +0200, Ard Biesheuvel wrote: > On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > because the ordinary load/store instructions (ldr, ldrh, ldrb) can > tolerate any misalignment of the memory address. However, load/store > double and load/store multiple instructions (ldrd, ldm) may still only > be used on memory addresses that are 32-bit aligned, and so we have to > use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we > may end up with a severe performance hit due to alignment traps that > require fixups by the kernel. > > Fortunately, the get_unaligned() accessors do the right thing: when > building for ARMv6 or later, the compiler will emit unaligned accesses > using the ordinary load/store instructions (but avoid the ones that > require 32-bit alignment). When building for older ARM, those accessors > will emit the appropriate sequence of ldrb/mov/orr instructions. And on > architectures that can truly tolerate any kind of misalignment, the > get_unaligned() accessors resolve to the leXX_to_cpup accessors that > operate on aligned addresses. > > So switch to the unaligned accessors for the aligned fast path. This > will create the exact same code on architectures that can really > tolerate any kind of misalignment, and generate code for ARMv6+ that > avoids load/store instructions that trigger alignment faults. > > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> > --- > crypto/algapi.c | 7 +++---- > include/crypto/algapi.h | 11 +++++++++-- > 2 files changed, 12 insertions(+), 6 deletions(-) > > diff --git a/crypto/algapi.c b/crypto/algapi.c > index 2545c5f89c4c..52ce3c5a0499 100644 > --- a/crypto/algapi.c > +++ b/crypto/algapi.c > @@ -988,11 +988,10 @@ void crypto_inc(u8 *a, unsigned int size) > __be32 *b = (__be32 *)(a + size); > u32 c; > > - if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || > - IS_ALIGNED((unsigned long)b, __alignof__(*b))) > + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) > for (; size >= 4; size -= 4) { > - c = be32_to_cpu(*--b) + 1; > - *b = cpu_to_be32(c); > + c = get_unaligned_be32(--b) + 1; > + put_unaligned_be32(c, b); > if (likely(c)) > return; > } > diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h > index 4a5ad10e75f0..86267c232f34 100644 > --- a/include/crypto/algapi.h > +++ b/include/crypto/algapi.h > @@ -17,6 +17,8 @@ > #include <linux/kernel.h> > #include <linux/skbuff.h> > > +#include <asm/unaligned.h> > + > /* > * Maximum values for blocksize and alignmask, used to allocate > * static buffers that are big enough for any combination of > @@ -212,7 +214,9 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size) > unsigned long *s = (unsigned long *)src; > > while (size > 0) { > - *d++ ^= *s++; > + put_unaligned(get_unaligned(d) ^ get_unaligned(s), d); > + d++; > + s++; > size -= sizeof(unsigned long); > } > } else { > @@ -231,7 +235,10 @@ static inline void crypto_xor_cpy(u8 *dst, const u8 *src1, const u8 *src2, > unsigned long *s2 = (unsigned long *)src2; > > while (size > 0) { > - *d++ = *s1++ ^ *s2++; > + put_unaligned(get_unaligned(s1) ^ get_unaligned(s2), d); > + d++; > + s1++; > + 
s2++; > size -= sizeof(unsigned long); > } > } else { > -- > 2.11.0 > Doesn't __crypto_xor() have the same problem too? - Eric ^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH 2/3] crypto: crypto_xor - use unaligned accessors for aligned fast path 2018-10-09 3:47 ` Eric Biggers @ 2018-10-09 8:38 ` Ard Biesheuvel -1 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-09 8:38 UTC (permalink / raw) To: Eric Biggers, Arnd Bergmann Cc: Jason A. Donenfeld, open list:HARDWARE RANDOM NUMBER GENERATOR CORE, linux-arm-kernel, Herbert Xu On 9 October 2018 at 05:47, Eric Biggers <ebiggers@kernel.org> wrote: > Hi Ard, > > On Mon, Oct 08, 2018 at 11:15:53PM +0200, Ard Biesheuvel wrote: >> On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> because the ordinary load/store instructions (ldr, ldrh, ldrb) can >> tolerate any misalignment of the memory address. However, load/store >> double and load/store multiple instructions (ldrd, ldm) may still only >> be used on memory addresses that are 32-bit aligned, and so we have to >> use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we >> may end up with a severe performance hit due to alignment traps that >> require fixups by the kernel. >> >> Fortunately, the get_unaligned() accessors do the right thing: when >> building for ARMv6 or later, the compiler will emit unaligned accesses >> using the ordinary load/store instructions (but avoid the ones that >> require 32-bit alignment). When building for older ARM, those accessors >> will emit the appropriate sequence of ldrb/mov/orr instructions. And on >> architectures that can truly tolerate any kind of misalignment, the >> get_unaligned() accessors resolve to the leXX_to_cpup accessors that >> operate on aligned addresses. >> >> So switch to the unaligned accessors for the aligned fast path. This >> will create the exact same code on architectures that can really >> tolerate any kind of misalignment, and generate code for ARMv6+ that >> avoids load/store instructions that trigger alignment faults. 
>> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> >> --- >> crypto/algapi.c | 7 +++---- >> include/crypto/algapi.h | 11 +++++++++-- >> 2 files changed, 12 insertions(+), 6 deletions(-) >> >> diff --git a/crypto/algapi.c b/crypto/algapi.c >> index 2545c5f89c4c..52ce3c5a0499 100644 >> --- a/crypto/algapi.c >> +++ b/crypto/algapi.c >> @@ -988,11 +988,10 @@ void crypto_inc(u8 *a, unsigned int size) >> __be32 *b = (__be32 *)(a + size); >> u32 c; >> >> - if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || >> - IS_ALIGNED((unsigned long)b, __alignof__(*b))) >> + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) >> for (; size >= 4; size -= 4) { >> - c = be32_to_cpu(*--b) + 1; >> - *b = cpu_to_be32(c); >> + c = get_unaligned_be32(--b) + 1; >> + put_unaligned_be32(c, b); >> if (likely(c)) >> return; >> } >> diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h >> index 4a5ad10e75f0..86267c232f34 100644 >> --- a/include/crypto/algapi.h >> +++ b/include/crypto/algapi.h >> @@ -17,6 +17,8 @@ >> #include <linux/kernel.h> >> #include <linux/skbuff.h> >> >> +#include <asm/unaligned.h> >> + >> /* >> * Maximum values for blocksize and alignmask, used to allocate >> * static buffers that are big enough for any combination of >> @@ -212,7 +214,9 @@ static inline void crypto_xor(u8 *dst, const u8 *src, unsigned int size) >> unsigned long *s = (unsigned long *)src; >> >> while (size > 0) { >> - *d++ ^= *s++; >> + put_unaligned(get_unaligned(d) ^ get_unaligned(s), d); >> + d++; >> + s++; >> size -= sizeof(unsigned long); >> } >> } else { >> @@ -231,7 +235,10 @@ static inline void crypto_xor_cpy(u8 *dst, const u8 *src1, const u8 *src2, >> unsigned long *s2 = (unsigned long *)src2; >> >> while (size > 0) { >> - *d++ = *s1++ ^ *s2++; >> + put_unaligned(get_unaligned(s1) ^ get_unaligned(s2), d); >> + d++; >> + s1++; >> + s2++; >> size -= sizeof(unsigned long); >> } >> } else { >> -- >> 2.11.0 >> > > Doesn't __crypto_xor() have the same problem too? > More or less, and I was wondering what to do about it. To fix __crypto_xor() correctly, we'd have to duplicate the code path that operates on the u64[], u32[] and u16[] chunks, or we'll end up with suboptimal code that uses the accessors even if the alignment routine has executed first. This is the same issue Jason points out in siphash. Perhaps the answer is to add 'fast' unaligned accessors that may be used on unaligned quantities only if CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set? E.g., #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS #define get_unaligned_fast get_unaligned #else #define get_unaligned_fast(x) (*(x)) #endif Arnd? ^ permalink raw reply [flat|nested] 23+ messages in thread
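To make the proposal above concrete, here is a sketch (not kernel code) of how such a 'fast' accessor, together with a hypothetical put_unaligned_fast() counterpart, could be applied to the 64-bit chunk of an XOR loop like the one in __crypto_xor(), so that the accessors only change the generated code when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set:

#include <linux/types.h>
#include <asm/unaligned.h>

#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
#define get_unaligned_fast	get_unaligned
#define put_unaligned_fast	put_unaligned
#else
#define get_unaligned_fast(p)		(*(p))
#define put_unaligned_fast(v, p)	(*(p) = (v))
#endif

/*
 * Illustrative loop only: the real __crypto_xor() in crypto/algapi.c
 * still dereferences the pointers directly at this point, after its
 * optional byte-wise alignment fixup has run.
 */
static void xor_words(u8 *dst, const u8 *src1, const u8 *src2,
		      unsigned int len)
{
	while (len >= sizeof(u64)) {
		u64 v = get_unaligned_fast((const u64 *)src1) ^
			get_unaligned_fast((const u64 *)src2);

		put_unaligned_fast(v, (u64 *)dst);
		dst += sizeof(u64);
		src1 += sizeof(u64);
		src2 += sizeof(u64);
		len -= sizeof(u64);
	}
}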
* [PATCH 3/3] crypto: siphash - drop _aligned variants 2018-10-08 21:15 ` Ard Biesheuvel @ 2018-10-08 21:15 ` Ard Biesheuvel -1 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-08 21:15 UTC (permalink / raw) To: linux-crypto Cc: jason, herbert, arnd, Ard Biesheuvel, ebiggers, linux-arm-kernel On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory addresses that are 32-bit aligned, and so we have to use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we may end up with a severe performance hit due to alignment traps that require fixups by the kernel. Fortunately, the get_unaligned() accessors do the right thing: when building for ARMv6 or later, the compiler will emit unaligned accesses using the ordinary load/store instructions (but avoid the ones that require 32-bit alignment). When building for older ARM, those accessors will emit the appropriate sequence of ldrb/mov/orr instructions. And on architectures that can truly tolerate any kind of misalignment, the get_unaligned() accessors resolve to the leXX_to_cpup accessors that operate on aligned addresses. Since the compiler will in fact emit ldrd or ldm instructions when building this code for ARM v6 or later, the solution is to use the unaligned accessors on the aligned code paths. Given the above, this either produces the same code, or better in the ARMv6+ case. However, since that removes the only difference between the aligned and unaligned variants, we can drop the aligned variant entirely. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> --- include/linux/siphash.h | 106 +++++++++----------- lib/siphash.c | 103 ++----------------- 2 files changed, 54 insertions(+), 155 deletions(-) diff --git a/include/linux/siphash.h b/include/linux/siphash.h index fa7a6b9cedbf..ef3c36b0ae0f 100644 --- a/include/linux/siphash.h +++ b/include/linux/siphash.h @@ -15,16 +15,14 @@ #include <linux/types.h> #include <linux/kernel.h> +#include <asm/unaligned.h> #define SIPHASH_ALIGNMENT __alignof__(u64) typedef struct { u64 key[2]; } siphash_key_t; -u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key); -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS -u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key); -#endif +u64 __siphash(const void *data, size_t len, const siphash_key_t *key); u64 siphash_1u64(const u64 a, const siphash_key_t *key); u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key); @@ -48,26 +46,6 @@ static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c, } -static inline u64 ___siphash_aligned(const __le64 *data, size_t len, - const siphash_key_t *key) -{ - if (__builtin_constant_p(len) && len == 4) - return siphash_1u32(le32_to_cpup((const __le32 *)data), key); - if (__builtin_constant_p(len) && len == 8) - return siphash_1u64(le64_to_cpu(data[0]), key); - if (__builtin_constant_p(len) && len == 16) - return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), - key); - if (__builtin_constant_p(len) && len == 24) - return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), - le64_to_cpu(data[2]), key); - if (__builtin_constant_p(len) && len == 32) - return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), - le64_to_cpu(data[2]), le64_to_cpu(data[3]), - key); - 
return __siphash_aligned(data, len, key); -} - /** * siphash - compute 64-bit siphash PRF value * @data: buffer to hash @@ -77,11 +55,30 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len, static inline u64 siphash(const void *data, size_t len, const siphash_key_t *key) { -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS - if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT)) - return __siphash_unaligned(data, len, key); -#endif - return ___siphash_aligned(data, len, key); + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) { + if (__builtin_constant_p(len) && len == 4) + return siphash_1u32(get_unaligned_le32(data), + key); + if (__builtin_constant_p(len) && len == 8) + return siphash_1u64(get_unaligned_le64(data), + key); + if (__builtin_constant_p(len) && len == 16) + return siphash_2u64(get_unaligned_le64(data), + get_unaligned_le64(data + 8), + key); + if (__builtin_constant_p(len) && len == 24) + return siphash_3u64(get_unaligned_le64(data), + get_unaligned_le64(data + 8), + get_unaligned_le64(data + 16), + key); + if (__builtin_constant_p(len) && len == 32) + return siphash_4u64(get_unaligned_le64(data), + get_unaligned_le64(data + 8), + get_unaligned_le64(data + 16), + get_unaligned_le64(data + 24), + key); + } + return __siphash(data, len, key); } #define HSIPHASH_ALIGNMENT __alignof__(unsigned long) @@ -89,12 +86,7 @@ typedef struct { unsigned long key[2]; } hsiphash_key_t; -u32 __hsiphash_aligned(const void *data, size_t len, - const hsiphash_key_t *key); -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS -u32 __hsiphash_unaligned(const void *data, size_t len, - const hsiphash_key_t *key); -#endif +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key); u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key); u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key); @@ -103,24 +95,6 @@ u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c, u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d, const hsiphash_key_t *key); -static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, - const hsiphash_key_t *key) -{ - if (__builtin_constant_p(len) && len == 4) - return hsiphash_1u32(le32_to_cpu(data[0]), key); - if (__builtin_constant_p(len) && len == 8) - return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), - key); - if (__builtin_constant_p(len) && len == 12) - return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), - le32_to_cpu(data[2]), key); - if (__builtin_constant_p(len) && len == 16) - return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), - le32_to_cpu(data[2]), le32_to_cpu(data[3]), - key); - return __hsiphash_aligned(data, len, key); -} - /** * hsiphash - compute 32-bit hsiphash PRF value * @data: buffer to hash @@ -130,11 +104,27 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, static inline u32 hsiphash(const void *data, size_t len, const hsiphash_key_t *key) { -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS - if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT)) - return __hsiphash_unaligned(data, len, key); -#endif - return ___hsiphash_aligned(data, len, key); + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) { + if (__builtin_constant_p(len) && len == 4) + return hsiphash_1u32(get_unaligned_le32(data), + key); + if (__builtin_constant_p(len) && len == 8) + return hsiphash_2u32(get_unaligned_le32(data), + get_unaligned_le32(data + 4), + key); + if (__builtin_constant_p(len) && len == 12) + return 
hsiphash_3u32(get_unaligned_le32(data), + get_unaligned_le32(data + 4), + get_unaligned_le32(data + 8), + key); + if (__builtin_constant_p(len) && len == 16) + return hsiphash_4u32(get_unaligned_le32(data), + get_unaligned_le32(data + 4), + get_unaligned_le32(data + 8), + get_unaligned_le32(data + 12), + key); + } + return __hsiphash(data, len, key); } #endif /* _LINUX_SIPHASH_H */ diff --git a/lib/siphash.c b/lib/siphash.c index 3ae58b4edad6..3b2ba1a10ad9 100644 --- a/lib/siphash.c +++ b/lib/siphash.c @@ -49,40 +49,7 @@ SIPROUND; \ return (v0 ^ v1) ^ (v2 ^ v3); -u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) -{ - const u8 *end = data + len - (len % sizeof(u64)); - const u8 left = len & (sizeof(u64) - 1); - u64 m; - PREAMBLE(len) - for (; data != end; data += sizeof(u64)) { - m = le64_to_cpup(data); - v3 ^= m; - SIPROUND; - SIPROUND; - v0 ^= m; - } -#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64 - if (left) - b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) & - bytemask_from_count(left))); -#else - switch (left) { - case 7: b |= ((u64)end[6]) << 48; - case 6: b |= ((u64)end[5]) << 40; - case 5: b |= ((u64)end[4]) << 32; - case 4: b |= le32_to_cpup(data); break; - case 3: b |= ((u64)end[2]) << 16; - case 2: b |= le16_to_cpup(data); break; - case 1: b |= end[0]; - } -#endif - POSTAMBLE -} -EXPORT_SYMBOL(__siphash_aligned); - -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS -u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) +u64 __siphash(const void *data, size_t len, const siphash_key_t *key) { const u8 *end = data + len - (len % sizeof(u64)); const u8 left = len & (sizeof(u64) - 1); @@ -112,8 +79,7 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) #endif POSTAMBLE } -EXPORT_SYMBOL(__siphash_unaligned); -#endif +EXPORT_SYMBOL(__siphash); /** * siphash_1u64 - compute 64-bit siphash PRF value of a u64 @@ -250,39 +216,7 @@ EXPORT_SYMBOL(siphash_3u32); HSIPROUND; \ return (v0 ^ v1) ^ (v2 ^ v3); -u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) -{ - const u8 *end = data + len - (len % sizeof(u64)); - const u8 left = len & (sizeof(u64) - 1); - u64 m; - HPREAMBLE(len) - for (; data != end; data += sizeof(u64)) { - m = le64_to_cpup(data); - v3 ^= m; - HSIPROUND; - v0 ^= m; - } -#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64 - if (left) - b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) & - bytemask_from_count(left))); -#else - switch (left) { - case 7: b |= ((u64)end[6]) << 48; - case 6: b |= ((u64)end[5]) << 40; - case 5: b |= ((u64)end[4]) << 32; - case 4: b |= le32_to_cpup(data); break; - case 3: b |= ((u64)end[2]) << 16; - case 2: b |= le16_to_cpup(data); break; - case 1: b |= end[0]; - } -#endif - HPOSTAMBLE -} -EXPORT_SYMBOL(__hsiphash_aligned); - -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS -u32 __hsiphash_unaligned(const void *data, size_t len, +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key) { const u8 *end = data + len - (len % sizeof(u64)); @@ -312,8 +246,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, #endif HPOSTAMBLE } -EXPORT_SYMBOL(__hsiphash_unaligned); -#endif +EXPORT_SYMBOL(__hsiphash); /** * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32 @@ -418,30 +351,7 @@ EXPORT_SYMBOL(hsiphash_4u32); HSIPROUND; \ return v1 ^ v3; -u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) -{ - const u8 *end = data + len - (len % sizeof(u32)); - 
const u8 left = len & (sizeof(u32) - 1); - u32 m; - HPREAMBLE(len) - for (; data != end; data += sizeof(u32)) { - m = le32_to_cpup(data); - v3 ^= m; - HSIPROUND; - v0 ^= m; - } - switch (left) { - case 3: b |= ((u32)end[2]) << 16; - case 2: b |= le16_to_cpup(data); break; - case 1: b |= end[0]; - } - HPOSTAMBLE -} -EXPORT_SYMBOL(__hsiphash_aligned); - -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS -u32 __hsiphash_unaligned(const void *data, size_t len, - const hsiphash_key_t *key) +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key) { const u8 *end = data + len - (len % sizeof(u32)); const u8 left = len & (sizeof(u32) - 1); @@ -460,8 +370,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, } HPOSTAMBLE } -EXPORT_SYMBOL(__hsiphash_unaligned); -#endif +EXPORT_SYMBOL(__hsiphash); /** * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32 -- 2.11.0 ^ permalink raw reply related [flat|nested] 23+ messages in thread
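For reference, the accessor distinction described in the patch above can be sketched as follows (illustrative only, not part of the patch; the helper names are made up):

	/*
	 * The same 64-bit little-endian load, expressed with the aligned
	 * and the unaligned accessor. On ARMv6+ the first may be merged
	 * into an ldrd/ldm that traps on a misaligned address, while the
	 * second is emitted as plain ldr instructions that tolerate any
	 * alignment; on architectures with true unaligned support both
	 * compile to the same code.
	 */
	#include <linux/kernel.h>
	#include <asm/unaligned.h>

	static u64 load_le64_aligned(const void *p)
	{
		return le64_to_cpup(p);		/* p must be 64-bit aligned */
	}

	static u64 load_le64_any(const void *p)
	{
		return get_unaligned_le64(p);	/* any alignment is fine */
	}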
* Re: [PATCH 3/3] crypto: siphash - drop _aligned variants 2018-10-08 21:15 ` Ard Biesheuvel @ 2018-10-09 4:11 ` Jason A. Donenfeld -1 siblings, 0 replies; 23+ messages in thread From: Jason A. Donenfeld @ 2018-10-09 4:11 UTC (permalink / raw) To: Ard Biesheuvel Cc: Linux Crypto Mailing List, Herbert Xu, Arnd Bergmann, Eric Biggers, linux-arm-kernel, LKML, linux-mips Hi Ard, On Mon, Oct 8, 2018 at 11:16 PM Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote: > > On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > because the ordinary load/store instructions (ldr, ldrh, ldrb) can > tolerate any misalignment of the memory address. However, load/store > double and load/store multiple instructions (ldrd, ldm) may still only > be used on memory addresses that are 32-bit aligned, and so we have to > use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we > may end up with a severe performance hit due to alignment traps that > require fixups by the kernel. > > Fortunately, the get_unaligned() accessors do the right thing: when > building for ARMv6 or later, the compiler will emit unaligned accesses > using the ordinary load/store instructions (but avoid the ones that > require 32-bit alignment). When building for older ARM, those accessors > will emit the appropriate sequence of ldrb/mov/orr instructions. And on > architectures that can truly tolerate any kind of misalignment, the > get_unaligned() accessors resolve to the leXX_to_cpup accessors that > operate on aligned addresses. > > Since the compiler will in fact emit ldrd or ldm instructions when > building this code for ARM v6 or later, the solution is to use the > unaligned accessors on the aligned code paths. Given the above, this > either produces the same code, or better in the ARMv6+ case. However, > since that removes the only difference between the aligned and unaligned > variants, we can drop the aligned variant entirely. 
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> > --- > include/linux/siphash.h | 106 +++++++++----------- > lib/siphash.c | 103 ++----------------- > 2 files changed, 54 insertions(+), 155 deletions(-) > > diff --git a/include/linux/siphash.h b/include/linux/siphash.h > index fa7a6b9cedbf..ef3c36b0ae0f 100644 > --- a/include/linux/siphash.h > +++ b/include/linux/siphash.h > @@ -15,16 +15,14 @@ > > #include <linux/types.h> > #include <linux/kernel.h> > +#include <asm/unaligned.h> > > #define SIPHASH_ALIGNMENT __alignof__(u64) > typedef struct { > u64 key[2]; > } siphash_key_t; > > -u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key); > -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > -u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key); > -#endif > +u64 __siphash(const void *data, size_t len, const siphash_key_t *key); > > u64 siphash_1u64(const u64 a, const siphash_key_t *key); > u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key); > @@ -48,26 +46,6 @@ static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c, > } > > > -static inline u64 ___siphash_aligned(const __le64 *data, size_t len, > - const siphash_key_t *key) > -{ > - if (__builtin_constant_p(len) && len == 4) > - return siphash_1u32(le32_to_cpup((const __le32 *)data), key); > - if (__builtin_constant_p(len) && len == 8) > - return siphash_1u64(le64_to_cpu(data[0]), key); > - if (__builtin_constant_p(len) && len == 16) > - return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), > - key); > - if (__builtin_constant_p(len) && len == 24) > - return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), > - le64_to_cpu(data[2]), key); > - if (__builtin_constant_p(len) && len == 32) > - return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), > - le64_to_cpu(data[2]), le64_to_cpu(data[3]), > - key); > - return __siphash_aligned(data, len, key); > -} > - > /** > * siphash - compute 64-bit siphash PRF value > * @data: buffer to hash > @@ -77,11 +55,30 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len, > static inline u64 siphash(const void *data, size_t len, > const siphash_key_t *key) > { > -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > - if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT)) > - return __siphash_unaligned(data, len, key); > -#endif > - return ___siphash_aligned(data, len, key); > + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) { > + if (__builtin_constant_p(len) && len == 4) > + return siphash_1u32(get_unaligned_le32(data), > + key); > + if (__builtin_constant_p(len) && len == 8) > + return siphash_1u64(get_unaligned_le64(data), > + key); > + if (__builtin_constant_p(len) && len == 16) > + return siphash_2u64(get_unaligned_le64(data), > + get_unaligned_le64(data + 8), > + key); > + if (__builtin_constant_p(len) && len == 24) > + return siphash_3u64(get_unaligned_le64(data), > + get_unaligned_le64(data + 8), > + get_unaligned_le64(data + 16), > + key); > + if (__builtin_constant_p(len) && len == 32) > + return siphash_4u64(get_unaligned_le64(data), > + get_unaligned_le64(data + 8), > + get_unaligned_le64(data + 16), > + get_unaligned_le64(data + 24), > + key); > + } > + return __siphash(data, len, key); > } > > #define HSIPHASH_ALIGNMENT __alignof__(unsigned long) > @@ -89,12 +86,7 @@ typedef struct { > unsigned long key[2]; > } hsiphash_key_t; > > -u32 __hsiphash_aligned(const void *data, size_t len, > - const hsiphash_key_t *key); > -#ifndef 
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > -u32 __hsiphash_unaligned(const void *data, size_t len, > - const hsiphash_key_t *key); > -#endif > +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key); > > u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key); > u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key); > @@ -103,24 +95,6 @@ u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c, > u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d, > const hsiphash_key_t *key); > > -static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, > - const hsiphash_key_t *key) > -{ > - if (__builtin_constant_p(len) && len == 4) > - return hsiphash_1u32(le32_to_cpu(data[0]), key); > - if (__builtin_constant_p(len) && len == 8) > - return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), > - key); > - if (__builtin_constant_p(len) && len == 12) > - return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), > - le32_to_cpu(data[2]), key); > - if (__builtin_constant_p(len) && len == 16) > - return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), > - le32_to_cpu(data[2]), le32_to_cpu(data[3]), > - key); > - return __hsiphash_aligned(data, len, key); > -} > - > /** > * hsiphash - compute 32-bit hsiphash PRF value > * @data: buffer to hash > @@ -130,11 +104,27 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, > static inline u32 hsiphash(const void *data, size_t len, > const hsiphash_key_t *key) > { > -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > - if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT)) > - return __hsiphash_unaligned(data, len, key); > -#endif > - return ___hsiphash_aligned(data, len, key); > + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) { > + if (__builtin_constant_p(len) && len == 4) > + return hsiphash_1u32(get_unaligned_le32(data), > + key); > + if (__builtin_constant_p(len) && len == 8) > + return hsiphash_2u32(get_unaligned_le32(data), > + get_unaligned_le32(data + 4), > + key); > + if (__builtin_constant_p(len) && len == 12) > + return hsiphash_3u32(get_unaligned_le32(data), > + get_unaligned_le32(data + 4), > + get_unaligned_le32(data + 8), > + key); > + if (__builtin_constant_p(len) && len == 16) > + return hsiphash_4u32(get_unaligned_le32(data), > + get_unaligned_le32(data + 4), > + get_unaligned_le32(data + 8), > + get_unaligned_le32(data + 12), > + key); > + } > + return __hsiphash(data, len, key); > } > > #endif /* _LINUX_SIPHASH_H */ > diff --git a/lib/siphash.c b/lib/siphash.c > index 3ae58b4edad6..3b2ba1a10ad9 100644 > --- a/lib/siphash.c > +++ b/lib/siphash.c > @@ -49,40 +49,7 @@ > SIPROUND; \ > return (v0 ^ v1) ^ (v2 ^ v3); > > -u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) > -{ > - const u8 *end = data + len - (len % sizeof(u64)); > - const u8 left = len & (sizeof(u64) - 1); > - u64 m; > - PREAMBLE(len) > - for (; data != end; data += sizeof(u64)) { > - m = le64_to_cpup(data); > - v3 ^= m; > - SIPROUND; > - SIPROUND; > - v0 ^= m; > - } > -#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64 > - if (left) > - b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) & > - bytemask_from_count(left))); > -#else > - switch (left) { > - case 7: b |= ((u64)end[6]) << 48; > - case 6: b |= ((u64)end[5]) << 40; > - case 5: b |= ((u64)end[4]) << 32; > - case 4: b |= le32_to_cpup(data); break; > - case 3: b |= ((u64)end[2]) << 16; > - case 2: b |= le16_to_cpup(data); break; > - case 1: b |= end[0]; > - } > 
-#endif > - POSTAMBLE > -} > -EXPORT_SYMBOL(__siphash_aligned); > - > -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > -u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) > +u64 __siphash(const void *data, size_t len, const siphash_key_t *key) > { > const u8 *end = data + len - (len % sizeof(u64)); > const u8 left = len & (sizeof(u64) - 1); > @@ -112,8 +79,7 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) > #endif > POSTAMBLE > } > -EXPORT_SYMBOL(__siphash_unaligned); > -#endif > +EXPORT_SYMBOL(__siphash); > > /** > * siphash_1u64 - compute 64-bit siphash PRF value of a u64 > @@ -250,39 +216,7 @@ EXPORT_SYMBOL(siphash_3u32); > HSIPROUND; \ > return (v0 ^ v1) ^ (v2 ^ v3); > > -u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) > -{ > - const u8 *end = data + len - (len % sizeof(u64)); > - const u8 left = len & (sizeof(u64) - 1); > - u64 m; > - HPREAMBLE(len) > - for (; data != end; data += sizeof(u64)) { > - m = le64_to_cpup(data); > - v3 ^= m; > - HSIPROUND; > - v0 ^= m; > - } > -#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64 > - if (left) > - b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) & > - bytemask_from_count(left))); > -#else > - switch (left) { > - case 7: b |= ((u64)end[6]) << 48; > - case 6: b |= ((u64)end[5]) << 40; > - case 5: b |= ((u64)end[4]) << 32; > - case 4: b |= le32_to_cpup(data); break; > - case 3: b |= ((u64)end[2]) << 16; > - case 2: b |= le16_to_cpup(data); break; > - case 1: b |= end[0]; > - } > -#endif > - HPOSTAMBLE > -} > -EXPORT_SYMBOL(__hsiphash_aligned); > - > -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > -u32 __hsiphash_unaligned(const void *data, size_t len, > +u32 __hsiphash(const void *data, size_t len, > const hsiphash_key_t *key) > { > const u8 *end = data + len - (len % sizeof(u64)); > @@ -312,8 +246,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, > #endif > HPOSTAMBLE > } > -EXPORT_SYMBOL(__hsiphash_unaligned); > -#endif > +EXPORT_SYMBOL(__hsiphash); > > /** > * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32 > @@ -418,30 +351,7 @@ EXPORT_SYMBOL(hsiphash_4u32); > HSIPROUND; \ > return v1 ^ v3; > > -u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) > -{ > - const u8 *end = data + len - (len % sizeof(u32)); > - const u8 left = len & (sizeof(u32) - 1); > - u32 m; > - HPREAMBLE(len) > - for (; data != end; data += sizeof(u32)) { > - m = le32_to_cpup(data); > - v3 ^= m; > - HSIPROUND; > - v0 ^= m; > - } > - switch (left) { > - case 3: b |= ((u32)end[2]) << 16; > - case 2: b |= le16_to_cpup(data); break; > - case 1: b |= end[0]; > - } > - HPOSTAMBLE > -} > -EXPORT_SYMBOL(__hsiphash_aligned); > - > -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS > -u32 __hsiphash_unaligned(const void *data, size_t len, > - const hsiphash_key_t *key) > +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key) > { > const u8 *end = data + len - (len % sizeof(u32)); > const u8 left = len & (sizeof(u32) - 1); > @@ -460,8 +370,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, > } > HPOSTAMBLE > } > -EXPORT_SYMBOL(__hsiphash_unaligned); > -#endif > +EXPORT_SYMBOL(__hsiphash); > > /** > * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32 > -- > 2.11.0 > As you might expect, when compiling in __siphash_unaligned and __siphash_aligned on the x86 at the same time, __siphash_unaligned is replaced with just "jmp __siphash_aligned", as gcc recognized that indeed the same code 
is generated. However, on platforms where get_unaligned_* does do something different, it looks to me like this patch now always calls the unaligned code, even when the input data _is_ an aligned address already, which is worse behaviour than before. While it would be possible for the get_unaligned_* function headers to also detect this and fallback to the faster version at compile time, by the time get_unaligned_* is used in this patch, it's no longer in the header, but rather in siphash.c, which means the compiler no longer knows that the address is aligned, and so we hit the slow path. This especially impacts architectures like MIPS, for example. This is why the original code, prior to this patch, checks the alignment in the .h and then selects which codepath afterwards. So while this patch might handle the ARM use case, it seems like a regression on all other platforms. See, for example, the struct passing in net/core/secure_seq.c, which sends intentionally aligned and packed structs to siphash, which then benefits from using the faster instructions on certain platforms. It seems like what you're grappling with on the ARM side of things is that CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS only half means what it says on some ISAs, complicating this logic. It seems like the ideal thing to do, given that, would be to just not set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS on those, so that we can fall back to the unaligned path always, like this patch suggests. Or if that's _too_ drastic, perhaps introduce another variable like CONFIG_MOSTLY_EFFICIENT_UNALIGNED_ACCESS. By the way, have you confirmed that the compiler actually does emit ldrd and ldm here? Jason ^ permalink raw reply [flat|nested] 23+ messages in thread
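The header-level dispatch Jason describes, as it existed before this patch, amounts to the following simplified sketch (the wrapper name is hypothetical; the real code is the siphash() inline shown in the diff):

	/*
	 * Pre-patch pattern: the alignment test lives in the inline header
	 * wrapper, so a caller passing a pointer the compiler can prove
	 * aligned is compiled straight into the aligned implementation,
	 * and only genuinely misaligned callers take the byte-wise path on
	 * architectures without CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS.
	 */
	static inline u64 siphash_dispatch(const void *data, size_t len,
					   const siphash_key_t *key)
	{
	#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
		if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
			return __siphash_unaligned(data, len, key);
	#endif
		return __siphash_aligned(data, len, key);
	}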
* Re: [PATCH 3/3] crypto: siphash - drop _aligned variants 2018-10-09 4:11 ` Jason A. Donenfeld @ 2018-10-09 5:59 ` Ard Biesheuvel -1 siblings, 0 replies; 23+ messages in thread From: Ard Biesheuvel @ 2018-10-09 5:59 UTC (permalink / raw) To: Jason A. Donenfeld Cc: Linux Crypto Mailing List, Herbert Xu, Arnd Bergmann, Eric Biggers, linux-arm-kernel, LKML, linux-mips On 9 October 2018 at 06:11, Jason A. Donenfeld <Jason@zx2c4.com> wrote: > Hi Ard, > > On Mon, Oct 8, 2018 at 11:16 PM Ard Biesheuvel > <ard.biesheuvel@linaro.org> wrote: >> >> On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> because the ordinary load/store instructions (ldr, ldrh, ldrb) can >> tolerate any misalignment of the memory address. However, load/store >> double and load/store multiple instructions (ldrd, ldm) may still only >> be used on memory addresses that are 32-bit aligned, and so we have to >> use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we >> may end up with a severe performance hit due to alignment traps that >> require fixups by the kernel. >> >> Fortunately, the get_unaligned() accessors do the right thing: when >> building for ARMv6 or later, the compiler will emit unaligned accesses >> using the ordinary load/store instructions (but avoid the ones that >> require 32-bit alignment). When building for older ARM, those accessors >> will emit the appropriate sequence of ldrb/mov/orr instructions. And on >> architectures that can truly tolerate any kind of misalignment, the >> get_unaligned() accessors resolve to the leXX_to_cpup accessors that >> operate on aligned addresses. >> >> Since the compiler will in fact emit ldrd or ldm instructions when >> building this code for ARM v6 or later, the solution is to use the >> unaligned accessors on the aligned code paths. Given the above, this >> either produces the same code, or better in the ARMv6+ case. However, >> since that removes the only difference between the aligned and unaligned >> variants, we can drop the aligned variant entirely. 
>> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> >> --- >> include/linux/siphash.h | 106 +++++++++----------- >> lib/siphash.c | 103 ++----------------- >> 2 files changed, 54 insertions(+), 155 deletions(-) >> >> diff --git a/include/linux/siphash.h b/include/linux/siphash.h >> index fa7a6b9cedbf..ef3c36b0ae0f 100644 >> --- a/include/linux/siphash.h >> +++ b/include/linux/siphash.h >> @@ -15,16 +15,14 @@ >> >> #include <linux/types.h> >> #include <linux/kernel.h> >> +#include <asm/unaligned.h> >> >> #define SIPHASH_ALIGNMENT __alignof__(u64) >> typedef struct { >> u64 key[2]; >> } siphash_key_t; >> >> -u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key); >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> -u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key); >> -#endif >> +u64 __siphash(const void *data, size_t len, const siphash_key_t *key); >> >> u64 siphash_1u64(const u64 a, const siphash_key_t *key); >> u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key); >> @@ -48,26 +46,6 @@ static inline u64 siphash_4u32(const u32 a, const u32 b, const u32 c, >> } >> >> >> -static inline u64 ___siphash_aligned(const __le64 *data, size_t len, >> - const siphash_key_t *key) >> -{ >> - if (__builtin_constant_p(len) && len == 4) >> - return siphash_1u32(le32_to_cpup((const __le32 *)data), key); >> - if (__builtin_constant_p(len) && len == 8) >> - return siphash_1u64(le64_to_cpu(data[0]), key); >> - if (__builtin_constant_p(len) && len == 16) >> - return siphash_2u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), >> - key); >> - if (__builtin_constant_p(len) && len == 24) >> - return siphash_3u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), >> - le64_to_cpu(data[2]), key); >> - if (__builtin_constant_p(len) && len == 32) >> - return siphash_4u64(le64_to_cpu(data[0]), le64_to_cpu(data[1]), >> - le64_to_cpu(data[2]), le64_to_cpu(data[3]), >> - key); >> - return __siphash_aligned(data, len, key); >> -} >> - >> /** >> * siphash - compute 64-bit siphash PRF value >> * @data: buffer to hash >> @@ -77,11 +55,30 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len, >> static inline u64 siphash(const void *data, size_t len, >> const siphash_key_t *key) >> { >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> - if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT)) >> - return __siphash_unaligned(data, len, key); >> -#endif >> - return ___siphash_aligned(data, len, key); >> + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) { >> + if (__builtin_constant_p(len) && len == 4) >> + return siphash_1u32(get_unaligned_le32(data), >> + key); >> + if (__builtin_constant_p(len) && len == 8) >> + return siphash_1u64(get_unaligned_le64(data), >> + key); >> + if (__builtin_constant_p(len) && len == 16) >> + return siphash_2u64(get_unaligned_le64(data), >> + get_unaligned_le64(data + 8), >> + key); >> + if (__builtin_constant_p(len) && len == 24) >> + return siphash_3u64(get_unaligned_le64(data), >> + get_unaligned_le64(data + 8), >> + get_unaligned_le64(data + 16), >> + key); >> + if (__builtin_constant_p(len) && len == 32) >> + return siphash_4u64(get_unaligned_le64(data), >> + get_unaligned_le64(data + 8), >> + get_unaligned_le64(data + 16), >> + get_unaligned_le64(data + 24), >> + key); >> + } >> + return __siphash(data, len, key); >> } >> >> #define HSIPHASH_ALIGNMENT __alignof__(unsigned long) >> @@ -89,12 +86,7 @@ typedef struct { >> unsigned long key[2]; >> } hsiphash_key_t; >> >> -u32 
__hsiphash_aligned(const void *data, size_t len, >> - const hsiphash_key_t *key); >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> -u32 __hsiphash_unaligned(const void *data, size_t len, >> - const hsiphash_key_t *key); >> -#endif >> +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key); >> >> u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key); >> u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key); >> @@ -103,24 +95,6 @@ u32 hsiphash_3u32(const u32 a, const u32 b, const u32 c, >> u32 hsiphash_4u32(const u32 a, const u32 b, const u32 c, const u32 d, >> const hsiphash_key_t *key); >> >> -static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, >> - const hsiphash_key_t *key) >> -{ >> - if (__builtin_constant_p(len) && len == 4) >> - return hsiphash_1u32(le32_to_cpu(data[0]), key); >> - if (__builtin_constant_p(len) && len == 8) >> - return hsiphash_2u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), >> - key); >> - if (__builtin_constant_p(len) && len == 12) >> - return hsiphash_3u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), >> - le32_to_cpu(data[2]), key); >> - if (__builtin_constant_p(len) && len == 16) >> - return hsiphash_4u32(le32_to_cpu(data[0]), le32_to_cpu(data[1]), >> - le32_to_cpu(data[2]), le32_to_cpu(data[3]), >> - key); >> - return __hsiphash_aligned(data, len, key); >> -} >> - >> /** >> * hsiphash - compute 32-bit hsiphash PRF value >> * @data: buffer to hash >> @@ -130,11 +104,27 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len, >> static inline u32 hsiphash(const void *data, size_t len, >> const hsiphash_key_t *key) >> { >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> - if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT)) >> - return __hsiphash_unaligned(data, len, key); >> -#endif >> - return ___hsiphash_aligned(data, len, key); >> + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) { >> + if (__builtin_constant_p(len) && len == 4) >> + return hsiphash_1u32(get_unaligned_le32(data), >> + key); >> + if (__builtin_constant_p(len) && len == 8) >> + return hsiphash_2u32(get_unaligned_le32(data), >> + get_unaligned_le32(data + 4), >> + key); >> + if (__builtin_constant_p(len) && len == 12) >> + return hsiphash_3u32(get_unaligned_le32(data), >> + get_unaligned_le32(data + 4), >> + get_unaligned_le32(data + 8), >> + key); >> + if (__builtin_constant_p(len) && len == 16) >> + return hsiphash_4u32(get_unaligned_le32(data), >> + get_unaligned_le32(data + 4), >> + get_unaligned_le32(data + 8), >> + get_unaligned_le32(data + 12), >> + key); >> + } >> + return __hsiphash(data, len, key); >> } >> >> #endif /* _LINUX_SIPHASH_H */ >> diff --git a/lib/siphash.c b/lib/siphash.c >> index 3ae58b4edad6..3b2ba1a10ad9 100644 >> --- a/lib/siphash.c >> +++ b/lib/siphash.c >> @@ -49,40 +49,7 @@ >> SIPROUND; \ >> return (v0 ^ v1) ^ (v2 ^ v3); >> >> -u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) >> -{ >> - const u8 *end = data + len - (len % sizeof(u64)); >> - const u8 left = len & (sizeof(u64) - 1); >> - u64 m; >> - PREAMBLE(len) >> - for (; data != end; data += sizeof(u64)) { >> - m = le64_to_cpup(data); >> - v3 ^= m; >> - SIPROUND; >> - SIPROUND; >> - v0 ^= m; >> - } >> -#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64 >> - if (left) >> - b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) & >> - bytemask_from_count(left))); >> -#else >> - switch (left) { >> - case 7: b |= ((u64)end[6]) << 48; >> - case 6: b |= ((u64)end[5]) << 40; >> - case 
5: b |= ((u64)end[4]) << 32; >> - case 4: b |= le32_to_cpup(data); break; >> - case 3: b |= ((u64)end[2]) << 16; >> - case 2: b |= le16_to_cpup(data); break; >> - case 1: b |= end[0]; >> - } >> -#endif >> - POSTAMBLE >> -} >> -EXPORT_SYMBOL(__siphash_aligned); >> - >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> -u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) >> +u64 __siphash(const void *data, size_t len, const siphash_key_t *key) >> { >> const u8 *end = data + len - (len % sizeof(u64)); >> const u8 left = len & (sizeof(u64) - 1); >> @@ -112,8 +79,7 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) >> #endif >> POSTAMBLE >> } >> -EXPORT_SYMBOL(__siphash_unaligned); >> -#endif >> +EXPORT_SYMBOL(__siphash); >> >> /** >> * siphash_1u64 - compute 64-bit siphash PRF value of a u64 >> @@ -250,39 +216,7 @@ EXPORT_SYMBOL(siphash_3u32); >> HSIPROUND; \ >> return (v0 ^ v1) ^ (v2 ^ v3); >> >> -u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) >> -{ >> - const u8 *end = data + len - (len % sizeof(u64)); >> - const u8 left = len & (sizeof(u64) - 1); >> - u64 m; >> - HPREAMBLE(len) >> - for (; data != end; data += sizeof(u64)) { >> - m = le64_to_cpup(data); >> - v3 ^= m; >> - HSIPROUND; >> - v0 ^= m; >> - } >> -#if defined(CONFIG_DCACHE_WORD_ACCESS) && BITS_PER_LONG == 64 >> - if (left) >> - b |= le64_to_cpu((__force __le64)(load_unaligned_zeropad(data) & >> - bytemask_from_count(left))); >> -#else >> - switch (left) { >> - case 7: b |= ((u64)end[6]) << 48; >> - case 6: b |= ((u64)end[5]) << 40; >> - case 5: b |= ((u64)end[4]) << 32; >> - case 4: b |= le32_to_cpup(data); break; >> - case 3: b |= ((u64)end[2]) << 16; >> - case 2: b |= le16_to_cpup(data); break; >> - case 1: b |= end[0]; >> - } >> -#endif >> - HPOSTAMBLE >> -} >> -EXPORT_SYMBOL(__hsiphash_aligned); >> - >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> -u32 __hsiphash_unaligned(const void *data, size_t len, >> +u32 __hsiphash(const void *data, size_t len, >> const hsiphash_key_t *key) >> { >> const u8 *end = data + len - (len % sizeof(u64)); >> @@ -312,8 +246,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, >> #endif >> HPOSTAMBLE >> } >> -EXPORT_SYMBOL(__hsiphash_unaligned); >> -#endif >> +EXPORT_SYMBOL(__hsiphash); >> >> /** >> * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32 >> @@ -418,30 +351,7 @@ EXPORT_SYMBOL(hsiphash_4u32); >> HSIPROUND; \ >> return v1 ^ v3; >> >> -u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) >> -{ >> - const u8 *end = data + len - (len % sizeof(u32)); >> - const u8 left = len & (sizeof(u32) - 1); >> - u32 m; >> - HPREAMBLE(len) >> - for (; data != end; data += sizeof(u32)) { >> - m = le32_to_cpup(data); >> - v3 ^= m; >> - HSIPROUND; >> - v0 ^= m; >> - } >> - switch (left) { >> - case 3: b |= ((u32)end[2]) << 16; >> - case 2: b |= le16_to_cpup(data); break; >> - case 1: b |= end[0]; >> - } >> - HPOSTAMBLE >> -} >> -EXPORT_SYMBOL(__hsiphash_aligned); >> - >> -#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS >> -u32 __hsiphash_unaligned(const void *data, size_t len, >> - const hsiphash_key_t *key) >> +u32 __hsiphash(const void *data, size_t len, const hsiphash_key_t *key) >> { >> const u8 *end = data + len - (len % sizeof(u32)); >> const u8 left = len & (sizeof(u32) - 1); >> @@ -460,8 +370,7 @@ u32 __hsiphash_unaligned(const void *data, size_t len, >> } >> HPOSTAMBLE >> } >> -EXPORT_SYMBOL(__hsiphash_unaligned); >> -#endif >> +EXPORT_SYMBOL(__hsiphash); >> 
>> /** >> * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32 >> -- >> 2.11.0 >> > > As you might expect, when compiling in __siphash_unaligned and > __siphash_aligned on the x86 at the same time, __siphash_unaligned is > replaced with just "jmp __siphash_aligned", as gcc recognized that > indeed the same code is generated. > Yeah, I noticed something similar on arm64, although we do get a stack frame there. > However, on platforms where get_unaligned_* does do something > different, it looks to me like this patch now always calls the > unaligned code, even when the input data _is_ an aligned address > already, which is worse behaviour than before. While it would be > possible for the get_unaligned_* function headers to also detect this > and fallback to the faster version at compile time, by the time > get_unaligned_* is used in this patch, it's no longer in the header, > but rather in siphash.c, which means the compiler no longer knows that > the address is aligned, and so we hit the slow path. This especially > impacts architectures like MIPS, for example. This is why the original > code, prior to this patch, checks the alignment in the .h and then > selects which codepath afterwards. So while this patch might handle > the ARM use case, it seems like a regression on all other platforms. > See, for example, the struct passing in net/core/secure_seq.c, which > sends intentionally aligned and packed structs to siphash, which then > benefits from using the faster instructions on certain platforms. > > It seems like what you're grappling with on the ARM side of things is > that CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS only half means what it > says on some ISAs, complicating this logic. It seems like the ideal > thing to do, given that, would be to just not set > CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS on those, so that we can fall > back to the unaligned path always, like this patch suggests. Or if > that's _too_ drastic, perhaps introduce another variable like > CONFIG_MOSTLY_EFFICIENT_UNALIGNED_ACCESS. > Perhaps we should clarify better what CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS means. One could argue that it means there is no point in reorganizing your data to make it appear aligned, because the unaligned accessors are cheap. Instead, it is used as a license to cast unaligned pointers to any type (which C does not permit btw), even in the example. So in the case of siphash, that would mean always taking the unaligned path if CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set, or only for unaligned data if it is not. > By the way, have you confirmed that the compiler actually does emit > ldrd and ldm here? > Yes. ^ permalink raw reply [flat|nested] 23+ messages in thread
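For readers skimming the thread, the packed-struct accessors that Ard refers to look roughly like the sketch below. This is a simplified illustration modelled on the kernel's generic unaligned helpers rather than a verbatim copy, and the struct and function names here are invented for the example. The point is that __packed removes the compiler's alignment assumption, so on ARMv6+ the access is lowered to ldr/ldrh/ldrb sequences (never ldrd/ldm), while on architectures that genuinely tolerate misalignment it collapses into a single ordinary load.

    #include <linux/compiler.h>     /* __packed */
    #include <linux/types.h>        /* u64, __le64 */
    #include <asm/byteorder.h>      /* le64_to_cpu() */

    /* hypothetical names, for illustration only */
    struct una_le64_sketch { __le64 x; } __packed;

    static inline u64 get_unaligned_le64_sketch(const void *p)
    {
            const struct una_le64_sketch *ptr = p;

            /* No alignment assumed: the compiler emits byte/halfword/word
             * loads on ARMv6+, and a plain 64-bit load on architectures
             * that handle misaligned addresses natively. */
            return le64_to_cpu(ptr->x);
    }

Whether routing even known-aligned data through such an accessor is acceptable on architectures where the unaligned path really is slower is exactly the concern Jason raises above.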
* Re: [PATCH 3/3] crypto: siphash - drop _aligned variants 2018-10-09 5:59 ` Ard Biesheuvel (?) @ 2018-10-09 6:10 ` Jeffrey Walton -1 siblings, 0 replies; 23+ messages in thread From: Jeffrey Walton @ 2018-10-09 6:10 UTC (permalink / raw) To: Ard Biesheuvel Cc: linux-mips, Jason A. Donenfeld, Herbert Xu, Arnd Bergmann, Eric Biggers, LKML, Linux Crypto Mailing List, linux-arm-kernel On Tue, Oct 9, 2018 at 2:00 AM Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote: > > On 9 October 2018 at 06:11, Jason A. Donenfeld <Jason@zx2c4.com> wrote: > > Hi Ard, > > ... > > As you might expect, when compiling in __siphash_unaligned and > > __siphash_aligned on the x86 at the same time, __siphash_unaligned is > > replaced with just "jmp __siphash_aligned", as gcc recognized that > > indeed the same code is generated. > > > Yeah, I noticed something similar on arm64, although we do get a stack > frame there. > > > However, on platforms where get_unaligned_* does do something > > different, it looks to me like this patch now always calls the > > unaligned code, even when the input data _is_ an aligned address > > already, which is worse behaviour than before. While it would be > > possible for the get_unaligned_* function headers to also detect this > > and fallback to the faster version at compile time, by the time > > get_unaligned_* is used in this patch, it's no longer in the header, > > but rather in siphash.c, which means the compiler no longer knows that > > the address is aligned, and so we hit the slow path. This especially > > impacts architectures like MIPS, for example. This is why the original > > code, prior to this patch, checks the alignment in the .h and then > > selects which codepath afterwards. So while this patch might handle > > the ARM use case, it seems like a regression on all other platforms. > > See, for example, the struct passing in net/core/secure_seq.c, which > > sends intentionally aligned and packed structs to siphash, which then > > benefits from using the faster instructions on certain platforms. > > > > It seems like what you're grappling with on the ARM side of things is > > that CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS only half means what it > > says on some ISAs, complicating this logic. It seems like the ideal > > thing to do, given that, would be to just not set > > CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS on those, so that we can fall > > back to the unaligned path always, like this patch suggests. Or if > > that's _too_ drastic, perhaps introduce another variable like > > CONFIG_MOSTLY_EFFICIENT_UNALIGNED_ACCESS. > > > Perhaps we should clarify better what > CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS means. > > One could argue that it means there is no point in reorganizing your > data to make it appear aligned, because the unaligned accessors are > cheap. Instead, it is used as a license to cast unaligned pointers to > any type (which C does not permit btw), even in the example. I recommend avoiding this strategy. One of the libraries I help with used a similar strategy and was constantly putting out 1-off fires when GCC assumed, say, 4- or 8-byte alignments. Integer stuff was fine. The problems did not surface until vectorization at -O3 when the misaligned buffers started causing exceptions. To be clear, there were very few problems. It might surface with GCC 4.9 on ARM in one function; and then surface again with GCC 5.1 on x86_64 on another function; and then surface again under Cygwin for another function with GCC 6.3. 
The pattern was finally gutted in favor of the classic stuff - treat the data as unaligned and walk the buffer, OR'ing into a datatype. Or, memcpy it into aligned datatypes. Modern compilers recognize the pattern and it will be optimized the way you hope. Older GCC's, like say, GCC 4.3, may not do as well. But it is the price paid for portability and bug-free code. And nowadays those old GCC's and Clang's are getting more rare. There's no sense in doing something quickly if you can't arrive at the correct result or you crash at runtime. > So in the case of siphash, that would mean always taking the unaligned > path if CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set, or only for > unaligned data if it is not. Jeff ^ permalink raw reply [flat|nested] 23+ messages in thread
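To make the memcpy idiom Jeff describes concrete, here is a minimal, hypothetical sketch (not code from this thread): the load goes through a naturally aligned temporary, so no possibly misaligned pointer is ever dereferenced through a wider type, and modern compilers typically fold the memcpy into a single load, unaligned where the ISA allows it.

    #include <linux/string.h>       /* memcpy() */
    #include <linux/types.h>        /* u64, __le64 */
    #include <asm/byteorder.h>      /* le64_to_cpu() */

    static inline u64 load_le64_via_memcpy(const void *p)
    {
            __le64 v;

            /* The temporary is naturally aligned, so this is well defined
             * regardless of the alignment of p. */
            memcpy(&v, p, sizeof(v));
            return le64_to_cpu(v);
    }

    /*
     * By contrast, the kind of cast Jeff warns about lets the compiler
     * assume natural alignment, which is what bites at -O3 once it
     * starts emitting aligned vector loads:
     *
     *      u64 v = *(const u64 *)p;        // undefined if p is misaligned
     */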
Thread overview: 23+ messages

  2018-10-08 21:15 [PATCH 0/3] crypto: use unaligned accessors in aligned fast paths Ard Biesheuvel
  2018-10-08 21:15 ` Ard Biesheuvel
  2018-10-08 21:15 ` [PATCH 1/3] crypto: memneq - use unaligned accessors for aligned fast path Ard Biesheuvel
  2018-10-08 21:15 ` Ard Biesheuvel
  2018-10-09 3:34 ` Eric Biggers
  2018-10-09 3:34 ` Eric Biggers
  2018-10-09 6:01 ` Ard Biesheuvel
  2018-10-09 6:01 ` Ard Biesheuvel
  2018-10-08 21:15 ` [PATCH 2/3] crypto: crypto_xor " Ard Biesheuvel
  2018-10-08 21:15 ` Ard Biesheuvel
  2018-10-09 3:47 ` Eric Biggers
  2018-10-09 3:47 ` Eric Biggers
  2018-10-09 8:38 ` Ard Biesheuvel
  2018-10-09 8:38 ` Ard Biesheuvel
  2018-10-08 21:15 ` [PATCH 3/3] crypto: siphash - drop _aligned variants Ard Biesheuvel
  2018-10-08 21:15 ` Ard Biesheuvel
  2018-10-09 4:11 ` Jason A. Donenfeld
  2018-10-09 4:11 ` Jason A. Donenfeld
  2018-10-09 5:59 ` Ard Biesheuvel
  2018-10-09 5:59 ` Ard Biesheuvel
  2018-10-09 6:10 ` Jeffrey Walton
  2018-10-09 6:10 ` Jeffrey Walton
  2018-10-09 6:10 ` Jeffrey Walton