llvm.lists.linux.dev archive mirror
* [PATCH] x86, mem: move memmove to out of line assembler
@ 2022-09-23 17:02 Nick Desaulniers
  2022-09-23 17:29 ` Linus Torvalds
  0 siblings, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-23 17:02 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H. Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Ma Ling, Nick Desaulniers

When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
(depending on additional configs which I have not been able to isolate)
to observe a failure during register allocation:

  error: inline assembly requires more registers than available

when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().

memmove is quite large and probably shouldn't be inlined due to size
alone. A noinline function attribute would be the simplest fix, but
there's a few things that stand out with the current definition:

In addition to having complex constraints that can't always be resolved,
the clobber list seems to be missing %bx and %dx, and possibly %cl. By
using numbered operands rather than symbolic operands, the constraints
are quite obnoxious to refactor.

Having a large function be 99% inline asm is a code smell that this
function should simply be written in stand-alone out-of-line assembler.
That gives the opportunity for other cleanups like fixing the
inconsistent use of tabs vs spaces and instruction suffixes, and the
label 3 appearing twice.  Symbolic operands and local labels would
provide this code with a fresh coat of paint.

Moving this to out of line assembler guarantees that the
compiler cannot inline calls to memmove.

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
---
An alternative, lower risk approach:
Rather than do all of the above, we could simply provide a macro in the
vein of noinline_for_stack to document that we don't want LTO to inline
a particular function across translation unit boundaries, then apply
that to memmove. Up to the x86 maintainers' risk tolerances.
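
Something like the following, perhaps (the macro name and its placement
here are made up purely to illustrate the idea; noinline_for_stack
itself is just "#define noinline_for_stack noinline"):

	/*
	 * Document that a function must stay out of line even under LTO,
	 * e.g. because its inline asm constraints cannot be satisfied
	 * once it is inlined into larger callers.
	 */
	#define noinline_for_lto noinline

	/* then in memcpy_32.c: */
	__visible noinline_for_lto void *memmove(void *dest, const void *src,
						 size_t n)
	{
		/* existing inline asm body unchanged */
	}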

Also, this leaves arch/x86/lib/memcpy_32.c pretty sparse/nearly empty.
Perhaps I should clean that up here, too? Perhaps in
arch/x86/include/asm/string_32.h? That would probably allow for inlining
calls to memcpy and memset.
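
As another aside, for reviewers unfamiliar with the numbered vs.
symbolic operand distinction called out in the commit message, a
minimal, hypothetical example (not code from this patch):

	static inline int add_numbered(int sum, int addend)
	{
		/* %0 and %1 must be matched to the constraint lists by eye. */
		asm("addl %1, %0" : "+r"(sum) : "r"(addend));
		return sum;
	}

	static inline int add_symbolic(int sum, int addend)
	{
		/* Each reference names the constraint it uses. */
		asm("addl %[addend], %[sum]"
		    : [sum] "+r"(sum)
		    : [addend] "r"(addend));
		return sum;
	}

With as many operands as the old memmove uses, the numbered form turns
every change to the constraint lists into a renumbering exercise.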


 arch/x86/lib/Makefile     |   1 +
 arch/x86/lib/memcpy_32.c  | 187 -----------------------------------
 arch/x86/lib/memmove_32.S | 202 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 203 insertions(+), 187 deletions(-)
 create mode 100644 arch/x86/lib/memmove_32.S

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index f76747862bd2..9a0b8ed782e2 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -60,6 +60,7 @@ ifeq ($(CONFIG_X86_32),y)
         lib-y += checksum_32.o
         lib-y += strstr_32.o
         lib-y += string_32.o
+        lib-y += memmove_32.o
 ifneq ($(CONFIG_X86_CMPXCHG64),y)
         lib-y += cmpxchg8b_emu.o atomic64_386_32.o
 endif
diff --git a/arch/x86/lib/memcpy_32.c b/arch/x86/lib/memcpy_32.c
index ef3af7ff2c8a..a29b64befb93 100644
--- a/arch/x86/lib/memcpy_32.c
+++ b/arch/x86/lib/memcpy_32.c
@@ -17,190 +17,3 @@ __visible void *memset(void *s, int c, size_t count)
 	return __memset(s, c, count);
 }
 EXPORT_SYMBOL(memset);
-
-__visible void *memmove(void *dest, const void *src, size_t n)
-{
-	int d0,d1,d2,d3,d4,d5;
-	char *ret = dest;
-
-	__asm__ __volatile__(
-		/* Handle more 16 bytes in loop */
-		"cmp $0x10, %0\n\t"
-		"jb	1f\n\t"
-
-		/* Decide forward/backward copy mode */
-		"cmp %2, %1\n\t"
-		"jb	2f\n\t"
-
-		/*
-		 * movs instruction have many startup latency
-		 * so we handle small size by general register.
-		 */
-		"cmp  $680, %0\n\t"
-		"jb 3f\n\t"
-		/*
-		 * movs instruction is only good for aligned case.
-		 */
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 4f\n\t"
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes forward in each loop.
-		 */
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov 2*4(%1), %3\n\t"
-		"mov 3*4(%1), %4\n\t"
-		"mov  %3, 2*4(%2)\n\t"
-		"mov  %4, 3*4(%2)\n\t"
-		"lea  0x10(%1), %1\n\t"
-		"lea  0x10(%2), %2\n\t"
-		"jae 3b\n\t"
-		"add $0x10, %0\n\t"
-		"jmp 1f\n\t"
-
-		/*
-		 * Handle data forward by movs.
-		 */
-		".p2align 4\n\t"
-		"4:\n\t"
-		"mov -4(%1, %0), %3\n\t"
-		"lea -4(%2, %0), %4\n\t"
-		"shr $2, %0\n\t"
-		"rep movsl\n\t"
-		"mov %3, (%4)\n\t"
-		"jmp 11f\n\t"
-		/*
-		 * Handle data backward by movs.
-		 */
-		".p2align 4\n\t"
-		"6:\n\t"
-		"mov (%1), %3\n\t"
-		"mov %2, %4\n\t"
-		"lea -4(%1, %0), %1\n\t"
-		"lea -4(%2, %0), %2\n\t"
-		"shr $2, %0\n\t"
-		"std\n\t"
-		"rep movsl\n\t"
-		"mov %3,(%4)\n\t"
-		"cld\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Start to prepare for backward copy.
-		 */
-		".p2align 4\n\t"
-		"2:\n\t"
-		"cmp  $680, %0\n\t"
-		"jb 5f\n\t"
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 6b\n\t"
-
-		/*
-		 * Calculate copy position to tail.
-		 */
-		"5:\n\t"
-		"add %0, %1\n\t"
-		"add %0, %2\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes backward in each loop.
-		 */
-		"7:\n\t"
-		"sub $0x10, %0\n\t"
-
-		"mov -1*4(%1), %3\n\t"
-		"mov -2*4(%1), %4\n\t"
-		"mov  %3, -1*4(%2)\n\t"
-		"mov  %4, -2*4(%2)\n\t"
-		"mov -3*4(%1), %3\n\t"
-		"mov -4*4(%1), %4\n\t"
-		"mov  %3, -3*4(%2)\n\t"
-		"mov  %4, -4*4(%2)\n\t"
-		"lea  -0x10(%1), %1\n\t"
-		"lea  -0x10(%2), %2\n\t"
-		"jae 7b\n\t"
-		/*
-		 * Calculate copy position to head.
-		 */
-		"add $0x10, %0\n\t"
-		"sub %0, %1\n\t"
-		"sub %0, %2\n\t"
-
-		/*
-		 * Move data from 8 bytes to 15 bytes.
-		 */
-		".p2align 4\n\t"
-		"1:\n\t"
-		"cmp $8, %0\n\t"
-		"jb 8f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov -2*4(%1, %0), %5\n\t"
-		"mov -1*4(%1, %0), %1\n\t"
-
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov  %5, -2*4(%2, %0)\n\t"
-		"mov  %1, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 4 bytes to 7 bytes.
-		 */
-		".p2align 4\n\t"
-		"8:\n\t"
-		"cmp $4, %0\n\t"
-		"jb 9f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov -1*4(%1, %0), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 2 bytes to 3 bytes.
-		 */
-		".p2align 4\n\t"
-		"9:\n\t"
-		"cmp $2, %0\n\t"
-		"jb 10f\n\t"
-		"movw 0*2(%1), %%dx\n\t"
-		"movw -1*2(%1, %0), %%bx\n\t"
-		"movw %%dx, 0*2(%2)\n\t"
-		"movw %%bx, -1*2(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data for 1 byte.
-		 */
-		".p2align 4\n\t"
-		"10:\n\t"
-		"cmp $1, %0\n\t"
-		"jb 11f\n\t"
-		"movb (%1), %%cl\n\t"
-		"movb %%cl, (%2)\n\t"
-		".p2align 4\n\t"
-		"11:"
-		: "=&c" (d0), "=&S" (d1), "=&D" (d2),
-		  "=r" (d3),"=r" (d4), "=r"(d5)
-		:"0" (n),
-		 "1" (src),
-		 "2" (dest)
-		:"memory");
-
-	return ret;
-
-}
-EXPORT_SYMBOL(memmove);
diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
new file mode 100644
index 000000000000..096feb154f64
--- /dev/null
+++ b/arch/x86/lib/memmove_32.S
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+
+SYM_FUNC_START(memmove)
+/*
+ * void *memmove(void *dest, const void *src, size_t n)
+ * -mregparm=3 passes these in registers:
+ */
+.set dest, %eax
+.set src, %edx
+.set n, %ecx
+
+/* Need 3 scratch registers. These need to be saved+restored. */
+.set tmp0, %edi
+.set tmp1, %ebx
+.set tmp2, %esi
+
+	pushl	%ebp
+	movl	%esp, %ebp
+
+	pushl	dest
+	pushl	tmp0
+	pushl	tmp1
+	pushl	tmp2
+
+	/* Handle more 16 bytes in loop */
+	cmpl	$0x10, n
+	jb	.L16_byteswap
+
+	/* Decide forward/backward copy mode */
+	cmpl	dest, src
+	jb	.Lbackwards_header
+
+	/*
+	 * movs instruction have many startup latency
+	 * so we handle small size by general register.
+	 */
+	cmpl	$680, n
+	jb	.Ltoo_small_forwards
+	/*
+	 * movs instruction is only good for aligned case.
+	 */
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lforward_movs
+.Ltoo_small_forwards:
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes forward in each loop.
+	 */
+.L16_byteswap_forwards_loop:
+	subl	$0x10, n
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	2*4(src), tmp0
+	movl	3*4(src), tmp1
+	movl	tmp0, 2*4(dest)
+	movl	tmp1, 3*4(dest)
+	leal	0x10(src), src
+	leal	0x10(dest), dest
+	jae	.L16_byteswap_forwards_loop
+	addl	$0x10, n
+	jmp	.L16_byteswap
+
+	/*
+	 * Handle data forward by movs.
+	 */
+.p2align 4
+.Lforward_movs:
+	movl	-4(src, n), tmp0
+	leal	-4(dest, n), tmp1
+	shrl	$2, n
+	rep	movsl
+	movl	tmp0, (tmp1)
+	jmp	.Ldone
+	/*
+	 * Handle data backward by movs.
+	 */
+.p2align 4
+.Lbackwards_movs:
+	movl	(src), tmp0
+	movl	dest, tmp1
+	leal	-4(src, n), src
+	leal	-4(dest, n), dest
+	shrl	$2, n
+	std
+	rep	movsl
+	movl	tmp0,(tmp1)
+	cld
+	jmp	.Ldone
+
+	/*
+	 * Start to prepare for backward copy.
+	 */
+.p2align 4
+.Lbackwards_header:
+	cmpl	$680, n
+	jb	.Ltoo_small_backwards
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lbackwards_movs
+
+	/*
+	 * Calculate copy position to tail.
+	 */
+.Ltoo_small_backwards:
+	addl	n, src
+	addl	n, dest
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes backward in each loop.
+	 */
+.L16_byteswap_backwards_loop:
+	subl	$0x10, n
+
+	movl	-1*4(src), tmp0
+	movl	-2*4(src), tmp1
+	movl	tmp0, -1*4(dest)
+	movl	tmp1, -2*4(dest)
+	movl	-3*4(src), tmp0
+	movl	-4*4(src), tmp1
+	movl	tmp0, -3*4(dest)
+	movl	tmp1, -4*4(dest)
+	leal	-0x10(src), src
+	leal	-0x10(dest), dest
+	jae	.L16_byteswap_backwards_loop
+	/*
+	 * Calculate copy position to head.
+	 */
+	addl	$0x10, n
+	subl	n, src
+	subl	n, dest
+
+	/*
+	 * Move data from 8 bytes to 15 bytes.
+	 */
+.p2align 4
+.L16_byteswap:
+	cmpl	$8, n
+	jb	.L8_byteswap
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	-2*4(src, n), tmp2
+	movl	-1*4(src, n), src
+
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	tmp2, -2*4(dest, n)
+	movl	src, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 4 bytes to 7 bytes.
+	 */
+.p2align 4
+.L8_byteswap:
+	cmpl	$4, n
+	jb	.L4_byteswap
+	movl	0*4(src), tmp0
+	movl	-1*4(src, n), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 2 bytes to 3 bytes.
+	 */
+.p2align 4
+.L4_byteswap:
+	cmpl	$2, n
+	jb	.Lbyteswap
+	movw	0*2(src), %di
+	movw	-1*2(src, n), %bx
+	movw	%dx, 0*2(dest)
+	movw	%bx, -1*2(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data for 1 byte.
+	 */
+.p2align 4
+.Lbyteswap:
+	cmpl	$1, n
+	jb	.Ldone
+	movb	(src), %cl
+	movb	%cl, (dest)
+.p2align 4
+.Ldone:
+	popl	tmp2
+	popl	tmp1
+	popl	tmp0
+	popl	%eax
+	popl	%ebp
+	RET
+SYM_FUNC_END(memmove)
-- 
2.37.3.998.g577e59143f-goog


* Re: [PATCH] x86, mem: move memmove to out of line assembler
  2022-09-23 17:02 [PATCH] x86, mem: move memmove to out of line assembler Nick Desaulniers
@ 2022-09-23 17:29 ` Linus Torvalds
  2022-09-23 17:55   ` Nick Desaulniers
  0 siblings, 1 reply; 26+ messages in thread
From: Linus Torvalds @ 2022-09-23 17:29 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel, llvm,
	Andy Lutomirski, Ma Ling

On Fri, Sep 23, 2022 at 10:02 AM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> memmove is quite large and probably shouldn't be inlined due to size
> alone. A noinline function attribute would be the simplest fix, but
> there's a few things that stand out with the current definition:

I don't think your patch is wrong, and it's not that different from
what the x86-64 code did long ago back in 2011 in commit 9599ec0471de
("x86-64, mem: Convert memmove() to assembly file and fix return value
bug")

But I wonder if we might be better off getting rid of that horrid
memmove thing entirely. The original thing seems to be from 2010,
where commit 3b4b682becdf ("x86, mem: Optimize memmove for small size
and unaligned cases") did this thing for both 32-bit and 64-bit code.

And it's really not likely used all that much - memcpy is the *much*
more important function, and that's the one that has actually been
updated for modern CPU's instead of looking like it's some copy from
glibc or whatever.

To make things even stranger, on the x86-64 side, we actually have
*two* copies of 'memmove()' - it looks like memcpy_orig() is already a
memmove, in that it does that

        cmp  %dil, %sil
        jl .Lcopy_backward

thing that seems to mean that it already handles the overlapping case.

Anyway, that 32-bit memmove() implemented (badly) as inline asm does
need to go. As you point out, it seems to get the clobbers wrong too,
so it's actively buggy and just happens to work because there's
nothing else in that function, and it clobbers %bx that is supposed to
be callee-saved, but *presumably* the compiler has to allocate %bx as one
of the other registers, so it will save and restore %bx anyway.

So that current memmove() seems to be truly horrendously buggy, but in
a "it happens to work" way. Your patch seems an improvement, but I get
the feeling that it could be improved a lot more...

              Linus

* Re: [PATCH] x86, mem: move memmove to out of line assembler
  2022-09-23 17:29 ` Linus Torvalds
@ 2022-09-23 17:55   ` Nick Desaulniers
  2022-09-23 18:05     ` Linus Torvalds
  0 siblings, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-23 17:55 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel, llvm,
	Andy Lutomirski

Dropping Ma, emails bouncing.

On Fri, Sep 23, 2022 at 10:30 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Fri, Sep 23, 2022 at 10:02 AM Nick Desaulniers
> <ndesaulniers@google.com> wrote:
> >
> > memmove is quite large and probably shouldn't be inlined due to size
> > alone. A noinline function attribute would be the simplest fix, but
> > there's a few things that stand out with the current definition:
>
> I don't think your patch is wrong, and it's not that different from
> what the x86-64 code did long ago back in 2011 in commit 9599ec0471de
> ("x86-64, mem: Convert memmove() to assembly file and fix return value
> bug")
>
> But I wonder if we might be better off getting rid of that horrid
> memmove thing entirely.

We could remove __HAVE_ARCH_MEMMOVE from
arch/x86/include/asm/string_32.h for ARCH=i386 then rip this
arch-specific definition of memmove out.

Might performance regressions be a concern with that approach?

I'll write up a patch for that just to have on hand, and leave the
decision up to someone else.
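
For anyone following along, the override mechanism is roughly the
following (sketch from memory; the exact declarations may differ
slightly):

	/* arch/x86/include/asm/string_32.h */
	#define __HAVE_ARCH_MEMMOVE
	void *memmove(void *dest, const void *src, size_t n);

	/* lib/string.c only provides the generic fallback when no arch
	 * override is declared: */
	#ifndef __HAVE_ARCH_MEMMOVE
	void *memmove(void *dest, const void *src, size_t count)
	{
		/* byte-at-a-time copy */
	}
	#endif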

> The original thing seems to be from 2010,
> where commit 3b4b682becdf ("x86, mem: Optimize memmove for small size
> and unaligned cases") did this thing for both 32-bit and 64-bit code.
>
> And it's really not likely used all that much - memcpy is the *much*
> more important function, and that's the one that has actually been
> updated for modern CPU's instead of looking like it's some copy from
> glibc or whatever.

I suspect that's probably where the duplicate 3 label comes from:
likely some macros were expanded from another codebase's
implementation, then copied into the kernel sources.
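
The duplicated numeric label assembles fine, for what it's worth: GNU
as local labels may be redefined, and "3b"/"3f" bind to the nearest
preceding/following "3:". A hypothetical fragment, not the kernel code:

	3:				# first definition
		subl	$0x10, %ecx
	3:				# second definition
		subl	$0x10, %ecx
		jae	3b		# nearest preceding "3:", i.e. the second one

Legal, but easy to misread.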

>
> To make things even stranger, on the x86-64 side, we actually have
> *two* copies of 'memmove()' - it looks like memcpy_orig() is already a
> memmove, in that it does that
>
>         cmp  %dil, %sil
>         jl .Lcopy_backward
>
> thing that seems to mean that it already handles the overlapping case.
>
> Anyway, that 32-bit memmove() implemented (badly) as inline asm does
> need to go. As you point out, it seems to get the clobbers wrong too,
> so it's actively buggy and just happens to work because there's
> nothing else in that function, and it clobbers %bx that is supposed to
> be callee-saved, but *presumably* the compiler has to allocate %bx as
> one of the other registers, so it will save and restore %bx anyway.
>
> So that current memmove() seems to be truly horrendously buggy, but in
> a "it happens to work" way. Your patch seems an improvement, but I get
> the feeling that it could be improved a lot more...
-- 
Thanks,
~Nick Desaulniers

* Re: [PATCH] x86, mem: move memmove to out of line assembler
  2022-09-23 17:55   ` Nick Desaulniers
@ 2022-09-23 18:05     ` Linus Torvalds
  2022-09-27 17:03       ` Nick Desaulniers
  0 siblings, 1 reply; 26+ messages in thread
From: Linus Torvalds @ 2022-09-23 18:05 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel, llvm,
	Andy Lutomirski

On Fri, Sep 23, 2022 at 10:55 AM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> We could remove __HAVE_ARCH_MEMMOVE from
> arch/x86/include/asm/string_32.h for ARCH=i386 then rip this
> arch-specific definition of memmove out.
>
> Might performance regressions be a concern with that approach?

memmove() isn't particularly common, but it does happen for some paths
that can be hot - the usual case of moving parts of an array around. I
see filesystems and networking paths doing that.

The generic memmove() is a horrendous byte-at-a-time thing and only
good for bring-up of new architectures. That's not an option.
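
That generic fallback is essentially just the following (simplified
sketch, not the exact lib/string.c source):

	void *memmove(void *dest, const void *src, size_t count)
	{
		char *d = dest;
		const char *s = src;

		if (d <= s) {
			/* copy forwards, one byte per iteration */
			while (count--)
				*d++ = *s++;
		} else {
			/* copy backwards so overlapping ranges stay correct */
			d += count;
			s += count;
			while (count--)
				*--d = *--s;
		}
		return dest;
	}

i.e. one load and one store per byte, with no word-sized or rep movs
path at all.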

But I'm looking at that x86-64 memcpy_orig, and I think it looks
fairly good as a template for doing the same on x86-32. And we could
get rid of the duplication on the x86-64 side.

That said, your patch looks fine too, as a "minimal changes" thing.

                Linus

* Re: [PATCH] x86, mem: move memmove to out of line assembler
  2022-09-23 18:05     ` Linus Torvalds
@ 2022-09-27 17:03       ` Nick Desaulniers
  2022-09-27 17:28         ` [PATCH v2] " Nick Desaulniers
  0 siblings, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-27 17:03 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H. Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel, llvm,
	Andy Lutomirski

Oh, yeah my patch essentially _is_
commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
and fix return value bug")
but for 32b (and no return value bug).  I should probably amend the
commit message for this patch with a reference to it.

Also, I'm missing an EXPORT_SYMBOL in my v1, so modules that reference
memmove will fail to build during modpost. v2 is required.

On Fri, Sep 23, 2022 at 11:06 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> But I'm looking at that x86-64 memcpy_orig, and I think it looks
> fairly good as a template for doing the same on x86-32. And we could
> get rid of the duplication on the x86-64 side.

Is the suggestion that 64b memcpy_orig could be replaced with __memmove?

Sorry, I'm not sure I follow either suggestion for code reuse opportunities.

Also, any idea which QEMU machine/CPU models lack ERMS, for testing
these non-ERMS implementations?

>
> That said, your patch looks fine too, as a "minimal changes" thing.
>
>                 Linus

--
Thanks,
~Nick Desaulniers

* [PATCH v2] x86, mem: move memmove to out of line assembler
  2022-09-27 17:03       ` Nick Desaulniers
@ 2022-09-27 17:28         ` Nick Desaulniers
  2022-09-27 18:41           ` Kees Cook
  2022-09-27 19:23           ` Kees Cook
  0 siblings, 2 replies; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-27 17:28 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Nick Desaulniers

When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
(depending on additional configs which I have not been able to isolate)
to observe a failure during register allocation:

  error: inline assembly requires more registers than available

when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().

memmove is quite large and probably shouldn't be inlined due to size
alone. A noinline function attribute would be the simplest fix, but
there's a few things that stand out with the current definition:

In addition to having complex constraints that can't always be resolved,
the clobber list seems to be missing %bx and %dx, and possibly %cl. By
using numbered operands rather than symbolic operands, the constraints
are quite obnoxious to refactor.

Having a large function be 99% inline asm is a code smell that this
function should simply be written in stand-alone out-of-line assembler.
That gives the opportunity for other cleanups like fixing the
inconsistent use of tabs vs spaces and instruction suffixes, and the
label 3 appearing twice.  Symbolic operands and local labels would
provide this code with a fresh coat of paint.

Moving this to out of line assembler guarantees that the compiler cannot
inline calls to memmove.

This has been done previously for 64b:
commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
and fix return value bug")

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
---
Changes v1 -> v2:
* Add reference to 9599ec0471de in commit message.
* Include asm/export.h then make sure to EXPORT_SYMBOL(memmove).

 arch/x86/lib/Makefile     |   1 +
 arch/x86/lib/memcpy_32.c  | 187 ----------------------------------
 arch/x86/lib/memmove_32.S | 204 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 205 insertions(+), 187 deletions(-)
 create mode 100644 arch/x86/lib/memmove_32.S

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index f76747862bd2..9a0b8ed782e2 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -60,6 +60,7 @@ ifeq ($(CONFIG_X86_32),y)
         lib-y += checksum_32.o
         lib-y += strstr_32.o
         lib-y += string_32.o
+        lib-y += memmove_32.o
 ifneq ($(CONFIG_X86_CMPXCHG64),y)
         lib-y += cmpxchg8b_emu.o atomic64_386_32.o
 endif
diff --git a/arch/x86/lib/memcpy_32.c b/arch/x86/lib/memcpy_32.c
index ef3af7ff2c8a..a29b64befb93 100644
--- a/arch/x86/lib/memcpy_32.c
+++ b/arch/x86/lib/memcpy_32.c
@@ -17,190 +17,3 @@ __visible void *memset(void *s, int c, size_t count)
 	return __memset(s, c, count);
 }
 EXPORT_SYMBOL(memset);
-
-__visible void *memmove(void *dest, const void *src, size_t n)
-{
-	int d0,d1,d2,d3,d4,d5;
-	char *ret = dest;
-
-	__asm__ __volatile__(
-		/* Handle more 16 bytes in loop */
-		"cmp $0x10, %0\n\t"
-		"jb	1f\n\t"
-
-		/* Decide forward/backward copy mode */
-		"cmp %2, %1\n\t"
-		"jb	2f\n\t"
-
-		/*
-		 * movs instruction have many startup latency
-		 * so we handle small size by general register.
-		 */
-		"cmp  $680, %0\n\t"
-		"jb 3f\n\t"
-		/*
-		 * movs instruction is only good for aligned case.
-		 */
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 4f\n\t"
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes forward in each loop.
-		 */
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov 2*4(%1), %3\n\t"
-		"mov 3*4(%1), %4\n\t"
-		"mov  %3, 2*4(%2)\n\t"
-		"mov  %4, 3*4(%2)\n\t"
-		"lea  0x10(%1), %1\n\t"
-		"lea  0x10(%2), %2\n\t"
-		"jae 3b\n\t"
-		"add $0x10, %0\n\t"
-		"jmp 1f\n\t"
-
-		/*
-		 * Handle data forward by movs.
-		 */
-		".p2align 4\n\t"
-		"4:\n\t"
-		"mov -4(%1, %0), %3\n\t"
-		"lea -4(%2, %0), %4\n\t"
-		"shr $2, %0\n\t"
-		"rep movsl\n\t"
-		"mov %3, (%4)\n\t"
-		"jmp 11f\n\t"
-		/*
-		 * Handle data backward by movs.
-		 */
-		".p2align 4\n\t"
-		"6:\n\t"
-		"mov (%1), %3\n\t"
-		"mov %2, %4\n\t"
-		"lea -4(%1, %0), %1\n\t"
-		"lea -4(%2, %0), %2\n\t"
-		"shr $2, %0\n\t"
-		"std\n\t"
-		"rep movsl\n\t"
-		"mov %3,(%4)\n\t"
-		"cld\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Start to prepare for backward copy.
-		 */
-		".p2align 4\n\t"
-		"2:\n\t"
-		"cmp  $680, %0\n\t"
-		"jb 5f\n\t"
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 6b\n\t"
-
-		/*
-		 * Calculate copy position to tail.
-		 */
-		"5:\n\t"
-		"add %0, %1\n\t"
-		"add %0, %2\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes backward in each loop.
-		 */
-		"7:\n\t"
-		"sub $0x10, %0\n\t"
-
-		"mov -1*4(%1), %3\n\t"
-		"mov -2*4(%1), %4\n\t"
-		"mov  %3, -1*4(%2)\n\t"
-		"mov  %4, -2*4(%2)\n\t"
-		"mov -3*4(%1), %3\n\t"
-		"mov -4*4(%1), %4\n\t"
-		"mov  %3, -3*4(%2)\n\t"
-		"mov  %4, -4*4(%2)\n\t"
-		"lea  -0x10(%1), %1\n\t"
-		"lea  -0x10(%2), %2\n\t"
-		"jae 7b\n\t"
-		/*
-		 * Calculate copy position to head.
-		 */
-		"add $0x10, %0\n\t"
-		"sub %0, %1\n\t"
-		"sub %0, %2\n\t"
-
-		/*
-		 * Move data from 8 bytes to 15 bytes.
-		 */
-		".p2align 4\n\t"
-		"1:\n\t"
-		"cmp $8, %0\n\t"
-		"jb 8f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov -2*4(%1, %0), %5\n\t"
-		"mov -1*4(%1, %0), %1\n\t"
-
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov  %5, -2*4(%2, %0)\n\t"
-		"mov  %1, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 4 bytes to 7 bytes.
-		 */
-		".p2align 4\n\t"
-		"8:\n\t"
-		"cmp $4, %0\n\t"
-		"jb 9f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov -1*4(%1, %0), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 2 bytes to 3 bytes.
-		 */
-		".p2align 4\n\t"
-		"9:\n\t"
-		"cmp $2, %0\n\t"
-		"jb 10f\n\t"
-		"movw 0*2(%1), %%dx\n\t"
-		"movw -1*2(%1, %0), %%bx\n\t"
-		"movw %%dx, 0*2(%2)\n\t"
-		"movw %%bx, -1*2(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data for 1 byte.
-		 */
-		".p2align 4\n\t"
-		"10:\n\t"
-		"cmp $1, %0\n\t"
-		"jb 11f\n\t"
-		"movb (%1), %%cl\n\t"
-		"movb %%cl, (%2)\n\t"
-		".p2align 4\n\t"
-		"11:"
-		: "=&c" (d0), "=&S" (d1), "=&D" (d2),
-		  "=r" (d3),"=r" (d4), "=r"(d5)
-		:"0" (n),
-		 "1" (src),
-		 "2" (dest)
-		:"memory");
-
-	return ret;
-
-}
-EXPORT_SYMBOL(memmove);
diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
new file mode 100644
index 000000000000..73314a391a72
--- /dev/null
+++ b/arch/x86/lib/memmove_32.S
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/export.h>
+
+SYM_FUNC_START(memmove)
+/*
+ * void *memmove(void *dest, const void *src, size_t n)
+ * -mregparm=3 passes these in registers:
+ */
+.set dest, %eax
+.set src, %edx
+.set n, %ecx
+
+/* Need 3 scratch registers. These need to be saved+restored. */
+.set tmp0, %edi
+.set tmp1, %ebx
+.set tmp2, %esi
+
+	pushl	%ebp
+	movl	%esp, %ebp
+
+	pushl	dest
+	pushl	tmp0
+	pushl	tmp1
+	pushl	tmp2
+
+	/* Handle more 16 bytes in loop */
+	cmpl	$0x10, n
+	jb	.L16_byteswap
+
+	/* Decide forward/backward copy mode */
+	cmpl	dest, src
+	jb	.Lbackwards_header
+
+	/*
+	 * movs instruction have many startup latency
+	 * so we handle small size by general register.
+	 */
+	cmpl	$680, n
+	jb	.Ltoo_small_forwards
+	/*
+	 * movs instruction is only good for aligned case.
+	 */
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lforward_movs
+.Ltoo_small_forwards:
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes forward in each loop.
+	 */
+.L16_byteswap_forwards_loop:
+	subl	$0x10, n
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	2*4(src), tmp0
+	movl	3*4(src), tmp1
+	movl	tmp0, 2*4(dest)
+	movl	tmp1, 3*4(dest)
+	leal	0x10(src), src
+	leal	0x10(dest), dest
+	jae	.L16_byteswap_forwards_loop
+	addl	$0x10, n
+	jmp	.L16_byteswap
+
+	/*
+	 * Handle data forward by movs.
+	 */
+.p2align 4
+.Lforward_movs:
+	movl	-4(src, n), tmp0
+	leal	-4(dest, n), tmp1
+	shrl	$2, n
+	rep	movsl
+	movl	tmp0, (tmp1)
+	jmp	.Ldone
+	/*
+	 * Handle data backward by movs.
+	 */
+.p2align 4
+.Lbackwards_movs:
+	movl	(src), tmp0
+	movl	dest, tmp1
+	leal	-4(src, n), src
+	leal	-4(dest, n), dest
+	shrl	$2, n
+	std
+	rep	movsl
+	movl	tmp0,(tmp1)
+	cld
+	jmp	.Ldone
+
+	/*
+	 * Start to prepare for backward copy.
+	 */
+.p2align 4
+.Lbackwards_header:
+	cmpl	$680, n
+	jb	.Ltoo_small_backwards
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lbackwards_movs
+
+	/*
+	 * Calculate copy position to tail.
+	 */
+.Ltoo_small_backwards:
+	addl	n, src
+	addl	n, dest
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes backward in each loop.
+	 */
+.L16_byteswap_backwards_loop:
+	subl	$0x10, n
+
+	movl	-1*4(src), tmp0
+	movl	-2*4(src), tmp1
+	movl	tmp0, -1*4(dest)
+	movl	tmp1, -2*4(dest)
+	movl	-3*4(src), tmp0
+	movl	-4*4(src), tmp1
+	movl	tmp0, -3*4(dest)
+	movl	tmp1, -4*4(dest)
+	leal	-0x10(src), src
+	leal	-0x10(dest), dest
+	jae	.L16_byteswap_backwards_loop
+	/*
+	 * Calculate copy position to head.
+	 */
+	addl	$0x10, n
+	subl	n, src
+	subl	n, dest
+
+	/*
+	 * Move data from 8 bytes to 15 bytes.
+	 */
+.p2align 4
+.L16_byteswap:
+	cmpl	$8, n
+	jb	.L8_byteswap
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	-2*4(src, n), tmp2
+	movl	-1*4(src, n), src
+
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	tmp2, -2*4(dest, n)
+	movl	src, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 4 bytes to 7 bytes.
+	 */
+.p2align 4
+.L8_byteswap:
+	cmpl	$4, n
+	jb	.L4_byteswap
+	movl	0*4(src), tmp0
+	movl	-1*4(src, n), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 2 bytes to 3 bytes.
+	 */
+.p2align 4
+.L4_byteswap:
+	cmpl	$2, n
+	jb	.Lbyteswap
+	movw	0*2(src), %di
+	movw	-1*2(src, n), %bx
+	movw	%dx, 0*2(dest)
+	movw	%bx, -1*2(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data for 1 byte.
+	 */
+.p2align 4
+.Lbyteswap:
+	cmpl	$1, n
+	jb	.Ldone
+	movb	(src), %cl
+	movb	%cl, (dest)
+.p2align 4
+.Ldone:
+	popl	tmp2
+	popl	tmp1
+	popl	tmp0
+	popl	%eax
+	popl	%ebp
+	RET
+SYM_FUNC_END(memmove)
+EXPORT_SYMBOL(memmove)
-- 
2.37.3.998.g577e59143f-goog


* Re: [PATCH v2] x86, mem: move memmove to out of line assembler
  2022-09-27 17:28         ` [PATCH v2] " Nick Desaulniers
@ 2022-09-27 18:41           ` Kees Cook
  2022-09-27 19:23           ` Kees Cook
  1 sibling, 0 replies; 26+ messages in thread
From: Kees Cook @ 2022-09-27 18:41 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, linux-kernel, Linus Torvalds,
	llvm, Andy Lutomirski

On Tue, Sep 27, 2022 at 10:28:39AM -0700, Nick Desaulniers wrote:
> When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
> (depending on additional configs which I have not been able to isolate)
> to observe a failure during register allocation:
>
>   error: inline assembly requires more registers than available
>
> when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().
>
> memmove is quite large and probably shouldn't be inlined due to size
> alone. A noinline function attribute would be the simplest fix, but
> there's a few things that stand out with the current definition:
>
> In addition to having complex constraints that can't always be resolved,
> the clobber list seems to be missing %bx and %dx, and possibly %cl. By
> using numbered operands rather than symbolic operands, the constraints
> are quite obnoxious to refactor.
>
> Having a large function be 99% inline asm is a code smell that this
> function should simply be written in stand-alone out-of-line assembler.
> That gives the opportunity for other cleanups like fixing the
> inconsistent use of tabs vs spaces and instruction suffixes, and the
> label 3 appearing twice.  Symbolic operands and local labels would
> provide this code with a fresh coat of paint.
>
> Moving this to out of line assembler guarantees that the compiler cannot
> inline calls to memmove.
>
> This has been done previously for 64b:
> commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
> and fix return value bug")
>
> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>

Unfortunately, it seems something has gone wrong with this
implementation. Before the patch:

$ ./tools/testing/kunit/kunit.py run --arch=i386 memcpy
...
[11:26:24] [PASSED] memmove_test
...

After the patch:

$ ./tools/testing/kunit/kunit.py run --arch=i386 memcpy
...
[11:25:59] # memmove_test: ok: memmove() static initializers
[11:25:59] # memmove_test: ok: memmove() direct assignment
[11:25:59] # memmove_test: ok: memmove() complete overwrite
[11:25:59] # memmove_test: ok: memmove() middle overwrite
[11:25:59] # memmove_test: EXPECTATION FAILED at lib/memcpy_kunit.c:176
[11:25:59] Expected dest.data[i] == five.data[i], but
[11:25:59] dest.data[i] == 136
[11:25:59] five.data[i] == 0
[11:25:59] line 176: dest.data[10] (0x88) != five.data[10] (0x00)
[11:25:59] # memmove_test: ok: memmove() argument side-effects
[11:25:59] # memmove_test: ok: memmove() overlapping wr\xf0te
[11:25:59] not ok 3 - memmove_test
[11:25:59] [FAILED] memmove_test
...

data[10] starts set as 0x99, and in theory gets 0x0 written to it, but
the self-test sees 0x88 there. (?!) It seems the macro side-effect test
caught something else entirely?

-Kees

--
Kees Cook

* Re: [PATCH v2] x86, mem: move memmove to out of line assembler
  2022-09-27 17:28         ` [PATCH v2] " Nick Desaulniers
  2022-09-27 18:41           ` Kees Cook
@ 2022-09-27 19:23           ` Kees Cook
  2022-09-27 20:01             ` Nick Desaulniers
  1 sibling, 1 reply; 26+ messages in thread
From: Kees Cook @ 2022-09-27 19:23 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, linux-kernel, Linus Torvalds,
	llvm, Andy Lutomirski

On Tue, Sep 27, 2022 at 10:28:39AM -0700, Nick Desaulniers wrote:
> In addition to having complex constraints that can't always be resolved,
> the clobber list seems to be missing %bx and %dx, and possibly %cl. By
> using numbered operands rather than symbolic operands, the constraints
> are quite obnoxious to refactor.
> [...]
> -		/*
> -		 * Move data from 2 bytes to 3 bytes.
> -		 */
> -		".p2align 4\n\t"
> -		"9:\n\t"
> -		"cmp $2, %0\n\t"
> -		"jb 10f\n\t"
> -		"movw 0*2(%1), %%dx\n\t"
> -		"movw -1*2(%1, %0), %%bx\n\t"
> -		"movw %%dx, 0*2(%2)\n\t"
> -		"movw %%bx, -1*2(%2, %0)\n\t"
> -		"jmp 11f\n\t"
> [...]
> +.set tmp0, %edi
> [...]
> +	/*
> +	 * Move data from 2 bytes to 3 bytes.
> +	 */
> +.p2align 4
> +.L4_byteswap:
> +	cmpl	$2, n
> +	jb	.Lbyteswap
> +	movw	0*2(src), %di
> +	movw	-1*2(src, n), %bx
> +	movw	%dx, 0*2(dest)
> +	movw	%bx, -1*2(dest, n)
> +	jmp	.Ldone

Found it (need to use %di instead of %dx). With this changed, the kunit
test passes again:

diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
index 73314a391a72..9e33c9a1c595 100644
--- a/arch/x86/lib/memmove_32.S
+++ b/arch/x86/lib/memmove_32.S
@@ -179,7 +179,7 @@ SYM_FUNC_START(memmove)
 	jb	.Lbyteswap
 	movw	0*2(src), %di
 	movw	-1*2(src, n), %bx
-	movw	%dx, 0*2(dest)
+	movw	%di, 0*2(dest)
 	movw	%bx, -1*2(dest, n)
 	jmp	.Ldone

-Kees

-- 
Kees Cook

* Re: [PATCH v2] x86, mem: move memmove to out of line assembler
  2022-09-27 19:23           ` Kees Cook
@ 2022-09-27 20:01             ` Nick Desaulniers
  2022-09-27 20:36               ` Kees Cook
  0 siblings, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-27 20:01 UTC (permalink / raw)
  To: Kees Cook
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, linux-kernel, Linus Torvalds,
	llvm, Andy Lutomirski

On Tue, Sep 27, 2022 at 12:24 PM Kees Cook <keescook@chromium.org> wrote:
>
> On Tue, Sep 27, 2022 at 10:28:39AM -0700, Nick Desaulniers wrote:
> > In addition to having complex constraints that can't always be resolved,
> > the clobber list seems to be missing %bx and %dx, and possibly %cl. By
> > using numbered operands rather than symbolic operands, the constraints
> > are quite obnoxious to refactor.
> > [...]
> > -             /*
> > -              * Move data from 2 bytes to 3 bytes.
> > -              */
> > -             ".p2align 4\n\t"
> > -             "9:\n\t"
> > -             "cmp $2, %0\n\t"
> > -             "jb 10f\n\t"
> > -             "movw 0*2(%1), %%dx\n\t"
> > -             "movw -1*2(%1, %0), %%bx\n\t"
> > -             "movw %%dx, 0*2(%2)\n\t"
> > -             "movw %%bx, -1*2(%2, %0)\n\t"
> > -             "jmp 11f\n\t"
> > [...]
> > +.set tmp0, %edi
> > [...]
> > +     /*
> > +      * Move data from 2 bytes to 3 bytes.
> > +      */
> > +.p2align 4
> > +.L4_byteswap:
> > +     cmpl    $2, n
> > +     jb      .Lbyteswap
> > +     movw    0*2(src), %di
> > +     movw    -1*2(src, n), %bx
> > +     movw    %dx, 0*2(dest)
> > +     movw    %bx, -1*2(dest, n)
> > +     jmp     .Ldone
>
> Found it (need to use %di instead of %dx). With this changed, the kunit
> test passes again:
>
> diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
> index 73314a391a72..9e33c9a1c595 100644
> --- a/arch/x86/lib/memmove_32.S
> +++ b/arch/x86/lib/memmove_32.S
> @@ -179,7 +179,7 @@ SYM_FUNC_START(memmove)
>         jb      .Lbyteswap
>         movw    0*2(src), %di
>         movw    -1*2(src, n), %bx
> -       movw    %dx, 0*2(dest)
> +       movw    %di, 0*2(dest)
>         movw    %bx, -1*2(dest, n)
>         jmp     .Ldone

That was stupid of me: I updated the scratch register operand in the
instruction two before the one at issue and forgot to update the operand
for the register in question, breaking the swap.

I'll use .set directives to give these more descriptive names to avoid
such a mistake.

.set tmp0w, %di

then use tmp0w everywhere.  I will give names to the remaining
register operands, too.

Off thread, can you show me how to run the kunit tests, please?

Will send a v3 shortly.

>
> -Kees
>
> --
> Kees Cook



-- 
Thanks,
~Nick Desaulniers

* Re: [PATCH v2] x86, mem: move memmove to out of line assembler
  2022-09-27 20:01             ` Nick Desaulniers
@ 2022-09-27 20:36               ` Kees Cook
  2022-09-27 21:02                 ` [PATCH v3] " Nick Desaulniers
  0 siblings, 1 reply; 26+ messages in thread
From: Kees Cook @ 2022-09-27 20:36 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, linux-kernel, Linus Torvalds,
	llvm, Andy Lutomirski

On Tue, Sep 27, 2022 at 01:01:35PM -0700, Nick Desaulniers wrote:
> > Found it (need to use %di instead of %dx). With this changed, the kunit
> > test passes again:
> >
> 
> That was stupid of me, I updated the scratch register operand in the
> instruction 2 before the one at issue and forgot to update the operand
> for the register in question, breaking the swap.

No worries! It meant I got to do a very careful review of the port. :)

> Off thread, can you show me how to run the kunit tests, please?

It's literally just what I put in the original email. KUnit is super
easy to use. In the root of the kernel source tree, just run:

	./tools/testing/kunit/kunit.py run --arch=i386 memcpy

Without --arch, it'll default to um.

-- 
Kees Cook

* [PATCH v3] x86, mem: move memmove to out of line assembler
  2022-09-27 20:36               ` Kees Cook
@ 2022-09-27 21:02                 ` Nick Desaulniers
  2022-09-27 21:14                   ` Kees Cook
  2022-09-28  7:24                   ` Rasmus Villemoes
  0 siblings, 2 replies; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-27 21:02 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Nick Desaulniers

When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
(depending on additional configs which I have not been able to isolate)
to observe a failure during register allocation:

  error: inline assembly requires more registers than available

when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().

memmove is quite large and probably shouldn't be inlined due to size
alone. A noinline function attribute would be the simplest fix, but
there's a few things that stand out with the current definition:

In addition to having complex constraints that can't always be resolved,
the clobber list seems to be missing %bx and %dx, and possibly %cl. By
using numbered operands rather than symbolic operands, the constraints
are quite obnoxious to refactor.

Having a large function be 99% inline asm is a code smell that this
function should simply be written in stand-alone out-of-line assembler.
That gives the opportunity for other cleanups like fixing the
inconsistent use of tabs vs spaces and instruction suffixes, and the
label 3 appearing twice.  Symbolic operands and local labels would
provide this code with a fresh coat of paint.

Moving this to out of line assembler guarantees that the
compiler cannot inline calls to memmove.

This has been done previously for 64b:
commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
and fix return value bug")

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
---
Changes v2 -> v3:
* Fix bug I introduced in v1 when I changed one of temp register
  operands for one of the two instructions performing a mem to mem swap,
  but not the other instruction's operand, and discovered by Kees.
  Verified KUnit memcpy tests are passing via:
  $ ./tools/testing/kunit/kunit.py run --arch=i386 --make_options LLVM=1
  $ ./tools/testing/kunit/kunit.py run --arch=i386
  Fixed by using symbolic identifiers rather than open coded registers
  for the less-than-word-size temporary registers.
* Expand the comment about callee saved registers on i386 with a
  reference to the psABI.

Changes v1 -> v2:
* Add reference to 9599ec0471de in commit message.
* Include asm/export.h then make sure to EXPORT_SYMBOL(memmove).

 arch/x86/lib/Makefile     |   1 +
 arch/x86/lib/memcpy_32.c  | 187 ---------------------------------
 arch/x86/lib/memmove_32.S | 215 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 216 insertions(+), 187 deletions(-)
 create mode 100644 arch/x86/lib/memmove_32.S

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index f76747862bd2..9a0b8ed782e2 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -60,6 +60,7 @@ ifeq ($(CONFIG_X86_32),y)
         lib-y += checksum_32.o
         lib-y += strstr_32.o
         lib-y += string_32.o
+        lib-y += memmove_32.o
 ifneq ($(CONFIG_X86_CMPXCHG64),y)
         lib-y += cmpxchg8b_emu.o atomic64_386_32.o
 endif
diff --git a/arch/x86/lib/memcpy_32.c b/arch/x86/lib/memcpy_32.c
index ef3af7ff2c8a..a29b64befb93 100644
--- a/arch/x86/lib/memcpy_32.c
+++ b/arch/x86/lib/memcpy_32.c
@@ -17,190 +17,3 @@ __visible void *memset(void *s, int c, size_t count)
 	return __memset(s, c, count);
 }
 EXPORT_SYMBOL(memset);
-
-__visible void *memmove(void *dest, const void *src, size_t n)
-{
-	int d0,d1,d2,d3,d4,d5;
-	char *ret = dest;
-
-	__asm__ __volatile__(
-		/* Handle more 16 bytes in loop */
-		"cmp $0x10, %0\n\t"
-		"jb	1f\n\t"
-
-		/* Decide forward/backward copy mode */
-		"cmp %2, %1\n\t"
-		"jb	2f\n\t"
-
-		/*
-		 * movs instruction have many startup latency
-		 * so we handle small size by general register.
-		 */
-		"cmp  $680, %0\n\t"
-		"jb 3f\n\t"
-		/*
-		 * movs instruction is only good for aligned case.
-		 */
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 4f\n\t"
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes forward in each loop.
-		 */
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov 2*4(%1), %3\n\t"
-		"mov 3*4(%1), %4\n\t"
-		"mov  %3, 2*4(%2)\n\t"
-		"mov  %4, 3*4(%2)\n\t"
-		"lea  0x10(%1), %1\n\t"
-		"lea  0x10(%2), %2\n\t"
-		"jae 3b\n\t"
-		"add $0x10, %0\n\t"
-		"jmp 1f\n\t"
-
-		/*
-		 * Handle data forward by movs.
-		 */
-		".p2align 4\n\t"
-		"4:\n\t"
-		"mov -4(%1, %0), %3\n\t"
-		"lea -4(%2, %0), %4\n\t"
-		"shr $2, %0\n\t"
-		"rep movsl\n\t"
-		"mov %3, (%4)\n\t"
-		"jmp 11f\n\t"
-		/*
-		 * Handle data backward by movs.
-		 */
-		".p2align 4\n\t"
-		"6:\n\t"
-		"mov (%1), %3\n\t"
-		"mov %2, %4\n\t"
-		"lea -4(%1, %0), %1\n\t"
-		"lea -4(%2, %0), %2\n\t"
-		"shr $2, %0\n\t"
-		"std\n\t"
-		"rep movsl\n\t"
-		"mov %3,(%4)\n\t"
-		"cld\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Start to prepare for backward copy.
-		 */
-		".p2align 4\n\t"
-		"2:\n\t"
-		"cmp  $680, %0\n\t"
-		"jb 5f\n\t"
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 6b\n\t"
-
-		/*
-		 * Calculate copy position to tail.
-		 */
-		"5:\n\t"
-		"add %0, %1\n\t"
-		"add %0, %2\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes backward in each loop.
-		 */
-		"7:\n\t"
-		"sub $0x10, %0\n\t"
-
-		"mov -1*4(%1), %3\n\t"
-		"mov -2*4(%1), %4\n\t"
-		"mov  %3, -1*4(%2)\n\t"
-		"mov  %4, -2*4(%2)\n\t"
-		"mov -3*4(%1), %3\n\t"
-		"mov -4*4(%1), %4\n\t"
-		"mov  %3, -3*4(%2)\n\t"
-		"mov  %4, -4*4(%2)\n\t"
-		"lea  -0x10(%1), %1\n\t"
-		"lea  -0x10(%2), %2\n\t"
-		"jae 7b\n\t"
-		/*
-		 * Calculate copy position to head.
-		 */
-		"add $0x10, %0\n\t"
-		"sub %0, %1\n\t"
-		"sub %0, %2\n\t"
-
-		/*
-		 * Move data from 8 bytes to 15 bytes.
-		 */
-		".p2align 4\n\t"
-		"1:\n\t"
-		"cmp $8, %0\n\t"
-		"jb 8f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov -2*4(%1, %0), %5\n\t"
-		"mov -1*4(%1, %0), %1\n\t"
-
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov  %5, -2*4(%2, %0)\n\t"
-		"mov  %1, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 4 bytes to 7 bytes.
-		 */
-		".p2align 4\n\t"
-		"8:\n\t"
-		"cmp $4, %0\n\t"
-		"jb 9f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov -1*4(%1, %0), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 2 bytes to 3 bytes.
-		 */
-		".p2align 4\n\t"
-		"9:\n\t"
-		"cmp $2, %0\n\t"
-		"jb 10f\n\t"
-		"movw 0*2(%1), %%dx\n\t"
-		"movw -1*2(%1, %0), %%bx\n\t"
-		"movw %%dx, 0*2(%2)\n\t"
-		"movw %%bx, -1*2(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data for 1 byte.
-		 */
-		".p2align 4\n\t"
-		"10:\n\t"
-		"cmp $1, %0\n\t"
-		"jb 11f\n\t"
-		"movb (%1), %%cl\n\t"
-		"movb %%cl, (%2)\n\t"
-		".p2align 4\n\t"
-		"11:"
-		: "=&c" (d0), "=&S" (d1), "=&D" (d2),
-		  "=r" (d3),"=r" (d4), "=r"(d5)
-		:"0" (n),
-		 "1" (src),
-		 "2" (dest)
-		:"memory");
-
-	return ret;
-
-}
-EXPORT_SYMBOL(memmove);
diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
new file mode 100644
index 000000000000..146664b7eb92
--- /dev/null
+++ b/arch/x86/lib/memmove_32.S
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/export.h>
+
+SYM_FUNC_START(memmove)
+/*
+ * void *memmove(void *dest, const void *src, size_t n)
+ * -mregparm=3 passes these in registers:
+ */
+.set dest, %eax
+.set src, %edx
+.set n, %ecx
+
+/*
+ * Need 3 scratch registers. These need to be saved+restored. Section 3.2.1
+ * Footnote 7 of the System V Application Binary Interface Version 1.0 aka
+ * "psABI" notes:
+ *   Note that in contrast to the Intel386 ABI, %rdi, and %rsi belong to the
+ *   called function, not the caller.
+ * i.e. %edi and %esi are callee saved for i386 (because they belong to the
+ * caller).
+ */
+.set tmp0, %edi
+.set tmp0w, %di
+.set tmp1, %ebx
+.set tmp1w, %bx
+.set tmp2, %esi
+.set tmp3b, %cl
+
+	pushl	%ebp
+	movl	%esp, %ebp
+
+	pushl	dest
+	pushl	tmp0
+	pushl	tmp1
+	pushl	tmp2
+
+	/* Handle more 16 bytes in loop */
+	cmpl	$0x10, n
+	jb	.L16_byteswap
+
+	/* Decide forward/backward copy mode */
+	cmpl	dest, src
+	jb	.Lbackwards_header
+
+	/*
+	 * movs instruction have many startup latency
+	 * so we handle small size by general register.
+	 */
+	cmpl	$680, n
+	jb	.Ltoo_small_forwards
+	/*
+	 * movs instruction is only good for aligned case.
+	 */
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lforward_movs
+.Ltoo_small_forwards:
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes forward in each loop.
+	 */
+.L16_byteswap_forwards_loop:
+	subl	$0x10, n
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	2*4(src), tmp0
+	movl	3*4(src), tmp1
+	movl	tmp0, 2*4(dest)
+	movl	tmp1, 3*4(dest)
+	leal	0x10(src), src
+	leal	0x10(dest), dest
+	jae	.L16_byteswap_forwards_loop
+	addl	$0x10, n
+	jmp	.L16_byteswap
+
+	/*
+	 * Handle data forward by movs.
+	 */
+.p2align 4
+.Lforward_movs:
+	movl	-4(src, n), tmp0
+	leal	-4(dest, n), tmp1
+	shrl	$2, n
+	rep	movsl
+	movl	tmp0, (tmp1)
+	jmp	.Ldone
+	/*
+	 * Handle data backward by movs.
+	 */
+.p2align 4
+.Lbackwards_movs:
+	movl	(src), tmp0
+	movl	dest, tmp1
+	leal	-4(src, n), src
+	leal	-4(dest, n), dest
+	shrl	$2, n
+	std
+	rep	movsl
+	movl	tmp0,(tmp1)
+	cld
+	jmp	.Ldone
+
+	/*
+	 * Start to prepare for backward copy.
+	 */
+.p2align 4
+.Lbackwards_header:
+	cmpl	$680, n
+	jb	.Ltoo_small_backwards
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lbackwards_movs
+
+	/*
+	 * Calculate copy position to tail.
+	 */
+.Ltoo_small_backwards:
+	addl	n, src
+	addl	n, dest
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes backward in each loop.
+	 */
+.L16_byteswap_backwards_loop:
+	subl	$0x10, n
+
+	movl	-1*4(src), tmp0
+	movl	-2*4(src), tmp1
+	movl	tmp0, -1*4(dest)
+	movl	tmp1, -2*4(dest)
+	movl	-3*4(src), tmp0
+	movl	-4*4(src), tmp1
+	movl	tmp0, -3*4(dest)
+	movl	tmp1, -4*4(dest)
+	leal	-0x10(src), src
+	leal	-0x10(dest), dest
+	jae	.L16_byteswap_backwards_loop
+	/*
+	 * Calculate copy position to head.
+	 */
+	addl	$0x10, n
+	subl	n, src
+	subl	n, dest
+
+	/*
+	 * Move data from 8 bytes to 15 bytes.
+	 */
+.p2align 4
+.L16_byteswap:
+	cmpl	$8, n
+	jb	.L8_byteswap
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	-2*4(src, n), tmp2
+	movl	-1*4(src, n), src
+
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	tmp2, -2*4(dest, n)
+	movl	src, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 4 bytes to 7 bytes.
+	 */
+.p2align 4
+.L8_byteswap:
+	cmpl	$4, n
+	jb	.L4_byteswap
+	movl	0*4(src), tmp0
+	movl	-1*4(src, n), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 2 bytes to 3 bytes.
+	 */
+.p2align 4
+.L4_byteswap:
+	cmpl	$2, n
+	jb	.Lbyteswap
+	movw	0*2(src), tmp0w
+	movw	-1*2(src, n), tmp1w
+	movw	tmp0w, 0*2(dest)
+	movw	tmp1w, -1*2(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data for 1 byte.
+	 */
+.p2align 4
+.Lbyteswap:
+	cmpl	$1, n
+	jb	.Ldone
+	movb	(src), tmp3b
+	movb	tmp3b, (dest)
+.p2align 4
+.Ldone:
+	popl	tmp2
+	popl	tmp1
+	popl	tmp0
+	popl	%eax
+	popl	%ebp
+	RET
+SYM_FUNC_END(memmove)
+EXPORT_SYMBOL(memmove)
-- 
2.37.3.998.g577e59143f-goog


* Re: [PATCH v3] x86, mem: move memmove to out of line assembler
  2022-09-27 21:02                 ` [PATCH v3] " Nick Desaulniers
@ 2022-09-27 21:14                   ` Kees Cook
  2022-09-28  7:24                   ` Rasmus Villemoes
  1 sibling, 0 replies; 26+ messages in thread
From: Kees Cook @ 2022-09-27 21:14 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, linux-kernel, Linus Torvalds,
	llvm, Andy Lutomirski

On Tue, Sep 27, 2022 at 02:02:48PM -0700, Nick Desaulniers wrote:
> [...]
> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
> ---
> Changes v2 -> v3:
> * Fix bug I introduced in v1 when I changed one of temp register
>   operands for one of the two instructions performing a mem to mem swap,
>   but not the other instruction's operand, and discovered by Kees.
>   Verified KUnit memcpy tests are passing via:
>   $ ./tools/testing/kunit/kunit.py run --arch=i386 --make_options LLVM=1
>   $ ./tools/testing/kunit/kunit.py run --arch=i386
>   Fixed by using symbolic identifiers rather than open coded registers
>   for the less-than-word-size temporary registers.
> * Expand the comment about callee saved registers on i386 with a
>   reference to the psABI.

Thanks! This looks good.

Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook

* Re: [PATCH v3] x86, mem: move memmove to out of line assembler
  2022-09-27 21:02                 ` [PATCH v3] " Nick Desaulniers
  2022-09-27 21:14                   ` Kees Cook
@ 2022-09-28  7:24                   ` Rasmus Villemoes
  2022-09-28 19:00                     ` Linus Torvalds
  2022-09-28 19:06                     ` Nick Desaulniers
  1 sibling, 2 replies; 26+ messages in thread
From: Rasmus Villemoes @ 2022-09-28  7:24 UTC (permalink / raw)
  To: Nick Desaulniers, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski

On 27/09/2022 23.02, Nick Desaulniers wrote:

> +	/* Decide forward/backward copy mode */
> +	cmpl	dest, src
> +	jb	.Lbackwards_header

I know you're mostly just moving existing code, but for my own education
I'd like to understand this.

> +	/*
> +	 * movs instruction have many startup latency
> +	 * so we handle small size by general register.
> +	 */
> +	cmpl	$680, n
> +	jb	.Ltoo_small_forwards

OK, this I get, there's some overhead, and hence we need _some_ cutoff
value; 680 is probably chosen by some trial-and-error, but the exact
value likely doesn't matter too much.

> +	/*
> +	 * movs instruction is only good for aligned case.
> +	 */
> +	movl	src, tmp0
> +	xorl	dest, tmp0
> +	andl	$0xff, tmp0
> +	jz	.Lforward_movs

But this part I don't understand at all. This checks that the src and
dest have the same %256 value, which is a rather odd thing, and very
unlikely to ever be hit in practice. I could understand if it checked
that they were both 4 or 8 or 16-byte aligned (i.e., (src|dest)&FOO)),
or if it checked that they had the same offset within a cacheline [say
(src^dest)&0x3f].

Any idea where that comes from? Or am I just incapable of reading x86 asm?
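
In C terms, treating the pointers as unsigned long purely for
illustration, what I mean is:

	/* what the code tests: same address modulo 256 */
	bool same_mod_256       = ((src ^ dest) & 0xff) == 0;

	/* what I would have expected: both 4-byte aligned ... */
	bool both_4byte_aligned = ((src | dest) & 0x03) == 0;

	/* ... or at least the same offset within a 64-byte cache line */
	bool same_line_offset   = ((src ^ dest) & 0x3f) == 0;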

> +.Ltoo_small_forwards:
> +	subl	$0x10, n
> +
> +	/*
> +	 * We gobble 16 bytes forward in each loop.
> +	 */
> +.L16_byteswap_forwards_loop:
> +	subl	$0x10, n
> +	movl	0*4(src), tmp0
> +	movl	1*4(src), tmp1
> +	movl	tmp0, 0*4(dest)
> +	movl	tmp1, 1*4(dest)
> +	movl	2*4(src), tmp0
> +	movl	3*4(src), tmp1
> +	movl	tmp0, 2*4(dest)
> +	movl	tmp1, 3*4(dest)
> +	leal	0x10(src), src
> +	leal	0x10(dest), dest
> +	jae	.L16_byteswap_forwards_loop
> +	addl	$0x10, n
> +	jmp	.L16_byteswap
> +
> +	/*
> +	 * Handle data forward by movs.
> +	 */
> +.p2align 4
> +.Lforward_movs:
> +	movl	-4(src, n), tmp0
> +	leal	-4(dest, n), tmp1
> +	shrl	$2, n
> +	rep	movsl
> +	movl	tmp0, (tmp1)
> +	jmp	.Ldone

So in the original code, %1 was forced to be %esi and %2 was forced to
be %edi and they were initialized by src and dest. But here I fail to
see how those registers have been properly set up before the rep movs;
your names for those are tmp0 and tmp2. You have just loaded the last
word of the source to %edi, and AFAICT %esi aka tmp2 is entirely
uninitialized at this point (the only use is in L16_byteswap).

I must be missing something. Please enlighten me.

Rasmus

* Re: [PATCH v3] x86, mem: move memmove to out of line assembler
  2022-09-28  7:24                   ` Rasmus Villemoes
@ 2022-09-28 19:00                     ` Linus Torvalds
  2022-09-28 19:06                     ` Nick Desaulniers
  1 sibling, 0 replies; 26+ messages in thread
From: Linus Torvalds @ 2022-09-28 19:00 UTC (permalink / raw)
  To: Rasmus Villemoes
  Cc: Nick Desaulniers, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H . Peter Anvin, Peter Zijlstra, Kees Cook,
	linux-kernel, llvm, Andy Lutomirski

On Wed, Sep 28, 2022 at 12:24 AM Rasmus Villemoes
<linux@rasmusvillemoes.dk> wrote:
>
> > +     /*
> > +      * movs instruction have many startup latency
> > +      * so we handle small size by general register.
> > +      */
> > +     cmpl    $680, n
> > +     jb      .Ltoo_small_forwards
>
> OK, this I get, there's some overhead, and hence we need _some_ cutoff
> value; 680 is probably chosen by some trial-and-error, but the exact
> value likely doesn't matter too much.
>
> > +     /*
> > +      * movs instruction is only good for aligned case.
> > +      */
> > +     movl    src, tmp0
> > +     xorl    dest, tmp0
> > +     andl    $0xff, tmp0
> > +     jz      .Lforward_movs
>
> But this part I don't understand at all. This checks that the src and
> dest have the same %256 value, which is a rather odd thing,

Both of these checks basically reflect the time the original code was
added, back in 2011, and are basically "that was the 'rep movs'
implementation of the time".

Neither of them is very relevant today, and not the right way to check
anyway (ie FSRM should replace that test for 680 bytes etc).
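
(A rough C-level sketch of the kind of FSRM-aware check that implies --
the helper name is made up, and a real fix would more likely use an
ALTERNATIVE in the assembly itself:)

```
#include <asm/cpufeature.h>

/* Illustrative only: with Fast Short REP MOVS there is no need for a
 * size cutoff before using rep movs at all. */
static int memmove_use_rep_movs(unsigned long n)
{
	if (boot_cpu_has(X86_FEATURE_FSRM))
		return 1;
	return n >= 680;
}
```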

But fixing the code to check the right things should probably be a
separate issue from the "move from inline asm to explicit asm", so I
think the patch is right this way.

                    Linus

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] x86, mem: move memmove to out of line assembler
  2022-09-28  7:24                   ` Rasmus Villemoes
  2022-09-28 19:00                     ` Linus Torvalds
@ 2022-09-28 19:06                     ` Nick Desaulniers
  2022-09-28 20:49                       ` Nick Desaulniers
  1 sibling, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-28 19:06 UTC (permalink / raw)
  To: Rasmus Villemoes
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski

On Wed, Sep 28, 2022 at 12:24 AM Rasmus Villemoes
<linux@rasmusvillemoes.dk> wrote:
>
> On 27/09/2022 23.02, Nick Desaulniers wrote:
>
> > +     /* Decide forward/backward copy mode */
> > +     cmpl    dest, src
> > +     jb      .Lbackwards_header
>
> I know you're mostly just moving existing code, but for my own education
> I'd like to understand this.
>
> > +     /*
> > +      * movs instruction have many startup latency
> > +      * so we handle small size by general register.
> > +      */
> > +     cmpl    $680, n
> > +     jb      .Ltoo_small_forwards
>
> OK, this I get, there's some overhead, and hence we need _some_ cutoff
> value; 680 is probably chosen by some trial-and-error, but the exact
> value likely doesn't matter too much.

__memmove in arch/x86/lib/memmove_64.S uses the same value.
But I assume this is precisely why FSRM was created.
https://www.phoronix.com/news/Intel-5.6-FSRM-Memmove
commit f444a5ff95dc ("x86/cpufeatures: Add support for fast short REP; MOVSB")

>
> > +     /*
> > +      * movs instruction is only good for aligned case.
> > +      */
> > +     movl    src, tmp0
> > +     xorl    dest, tmp0
> > +     andl    $0xff, tmp0
> > +     jz      .Lforward_movs
>
> But this part I don't understand at all. This checks that the src and
> dest have the same %256 value, which is a rather odd thing, and very
> unlikely to ever be hit in practice. I could understand if it checked
> that they were both 4 or 8 or 16-byte aligned (i.e., (src|dest)&FOO)),
> or if it checked that they had the same offset within a cacheline [say
> (src^dest)&0x3f].
>
> Any idea where that comes from? Or am I just incapable of reading x86 asm?

So I think the above is roughly:
if (((src ^ dest) & 0xff) == 0)
  goto .Lforward_movs;

So if src and dest don't have the same bottom byte value, don't use movs?

>
> > +.Ltoo_small_forwards:
> > +     subl    $0x10, n
> > +
> > +     /*
> > +      * We gobble 16 bytes forward in each loop.
> > +      */
> > +.L16_byteswap_forwards_loop:
> > +     subl    $0x10, n
> > +     movl    0*4(src), tmp0
> > +     movl    1*4(src), tmp1
> > +     movl    tmp0, 0*4(dest)
> > +     movl    tmp1, 1*4(dest)
> > +     movl    2*4(src), tmp0
> > +     movl    3*4(src), tmp1
> > +     movl    tmp0, 2*4(dest)
> > +     movl    tmp1, 3*4(dest)
> > +     leal    0x10(src), src
> > +     leal    0x10(dest), dest
> > +     jae     .L16_byteswap_forwards_loop
> > +     addl    $0x10, n
> > +     jmp     .L16_byteswap
> > +
> > +     /*
> > +      * Handle data forward by movs.
> > +      */
> > +.p2align 4
> > +.Lforward_movs:
> > +     movl    -4(src, n), tmp0
> > +     leal    -4(dest, n), tmp1
> > +     shrl    $2, n
> > +     rep     movsl
> > +     movl    tmp0, (tmp1)
> > +     jmp     .Ldone
>
> So in the original code, %1 was forced to be %esi and %2 was forced to
> be %edi and they were initialized by src and dest. But here I fail to
> see how those registers have been properly set up before the rep movs;
> your names for those are tmp0 and tmp2. You have just loaded the last
> word of the source to %edi, and AFAICT %esi aka tmp2 is entirely
> uninitialized at this point (the only use is in L16_byteswap).
>
> I must be missing something. Please enlighten me.

No, you're right.  It looks like rep movsl needs src in %esi and dest
in %edi, so I can't reuse the input registers from -mregparm=3; a pair
of movls is needed.  A v4 is required.
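
For reference, a minimal sketch of the constraint (the wrapper below is
illustrative only, not something from the patch):

```
/* rep movsl takes all of its operands implicitly: the dword count in
 * %ecx ("c"), the source in %esi ("S"), the destination in %edi ("D"). */
static void rep_movsl(void *dest, const void *src, unsigned long ndwords)
{
	asm volatile("rep movsl"
		     : "+D" (dest), "+S" (src), "+c" (ndwords)
		     : : "memory");
}
```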

Probably should write a test for memcpy where n > magic constant 680.
-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] x86, mem: move memmove to out of line assembler
  2022-09-28 19:06                     ` Nick Desaulniers
@ 2022-09-28 20:49                       ` Nick Desaulniers
  2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
  0 siblings, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-28 20:49 UTC (permalink / raw)
  To: Rasmus Villemoes
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski

On Wed, Sep 28, 2022 at 12:06 PM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> On Wed, Sep 28, 2022 at 12:24 AM Rasmus Villemoes
> <linux@rasmusvillemoes.dk> wrote:
> >
> > On 27/09/2022 23.02, Nick Desaulniers wrote:
> >
> > > +     /*
> > > +      * Handle data forward by movs.
> > > +      */
> > > +.p2align 4
> > > +.Lforward_movs:
> > > +     movl    -4(src, n), tmp0
> > > +     leal    -4(dest, n), tmp1
> > > +     shrl    $2, n
> > > +     rep     movsl
> > > +     movl    tmp0, (tmp1)
> > > +     jmp     .Ldone
> >
> > So in the original code, %1 was forced to be %esi and %2 was forced to
> > be %edi and they were initialized by src and dest. But here I fail to
> > see how those registers have been properly set up before the rep movs;
> > your names for those are tmp0 and tmp2. You have just loaded the last
> > word of the source to %edi, and AFAICT %esi aka tmp2 is entirely
> > uninitialized at this point (the only use is in L16_byteswap).
> >
> > I must be missing something. Please enlighten me.
>
> No, you're right.  It looks like rep movsl needs src in %esi and dest
> in %edi, so I can't reuse the input registers from -mregparm=3; a pair
> of movls is needed.  A v4 is required.
>
> Probably should write a test for memcpy where n > magic constant 680.

This unit test hangs with v3 (and passes with my local v4 which I
haven't sent out yet):
```
index 62f8ffcbbaa3..c2e852762846 100644
--- a/lib/memcpy_kunit.c
+++ b/lib/memcpy_kunit.c
@@ -107,6 +107,8 @@ static void memcpy_test(struct kunit *test)
 #undef TEST_OP
 }

+static unsigned char larger_array [2048];
+
 static void memmove_test(struct kunit *test)
 {
 #define TEST_OP "memmove"
@@ -181,6 +183,20 @@ static void memmove_test(struct kunit *test)
        ptr = &overlap.data[2];
        memmove(ptr, overlap.data, 5);
        compare("overlapping write", overlap, overlap_expected);
+
+       /* Verify larger overlapping moves. */
+       larger_array[256] = 0xaa;
+       memmove(larger_array, larger_array + 256, 1024);
+       KUNIT_ASSERT_EQ(test, larger_array[0], 0xaa);
+       KUNIT_ASSERT_EQ(test, larger_array[256], 0x00);
+       KUNIT_ASSERT_NULL(test,
+               memchr(larger_array + 1, 0xaa, ARRAY_SIZE(larger_array) - 1));
```
I'll include the tests in my v4, including another for overlapping
memmove forwards.
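
For reference, the KUnit invocations used elsewhere in this thread to
exercise the i386 build are:

```
$ ./tools/testing/kunit/kunit.py run --arch=i386 --make_options LLVM=1
$ ./tools/testing/kunit/kunit.py run --arch=i386
```
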
-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-28 20:49                       ` Nick Desaulniers
@ 2022-09-28 21:05                         ` Nick Desaulniers
  2022-09-28 22:03                           ` Kees Cook
                                             ` (4 more replies)
  0 siblings, 5 replies; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-28 21:05 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes,
	Nick Desaulniers

When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
(depending on additional configs which I have not been able to isolate)
to observe a failure during register allocation:

  error: inline assembly requires more registers than available

when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().

memmove is quite large and probably shouldn't be inlined due to size
alone. A noinline function attribute would be the simplest fix, but
there's a few things that stand out with the current definition:

In addition to having complex constraints that can't always be resolved,
the clobber list seems to be missing %bx and %dx, and possibly %cl. By
using numbered operands rather than symbolic operands, the constraints
are quite obnoxious to refactor.

Having a large function be 99% inline asm is a code smell that this
function should simply be written in stand-alone out-of-line assembler.
That gives the opportunity for other cleanups like fixing the
inconsistent use of tabs vs spaces and instruction suffixes, and the
label 3 appearing twice.  Symbolic operands and local labels would
provide this code with a fresh coat of paint.

Moving this to out of line assembler guarantees that the
compiler cannot inline calls to memmove.

This has been done previously for 64b:
commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
and fix return value bug")

Also, add a test that tickles the `rep movsl` implementation to test it
for correctness, since it has implicit operands.

Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
---
Changes v3 -> v4:
* Fix bug I introduced in v1 when I changed src and dest to use
  different scratch registers, which breaks `rep movsl` as pointed out
  by Rasmus. This requires 2 `movl`s earlier on, changing the tmp
  registers, then adjusting which registers we save/restore by the
  calling convention. I intentionally did not carry forward tags from
  Kees from v3 due to this bug.
* Add a Kunit test that hangs on v3, but passes on v4. It uses a few
  magic constants as well in order to test the `rep movsl` paths.

Changes v2 -> v3:
* Fix a bug I introduced in v1 when I changed one of the temp register
  operands for one of the two instructions performing a mem-to-mem swap,
  but not the other instruction's operand; discovered by Kees.
  Verified KUnit memcpy tests are passing via:
  $ ./tools/testing/kunit/kunit.py run --arch=i386 --make_options LLVM=1
  $ ./tools/testing/kunit/kunit.py run --arch=i386
  Fixed by using symbolic identifiers rather than open coded registers
  for the less-than-word-size temporary registers.
* Expand the comment about callee saved registers on i386 with a
  reference to the psABI.

Changes v1 -> v2:
* Add reference to 9599ec0471de in commit message.
* Include asm/export.h then make sure to EXPORT_SYMBOL(memmove).

 arch/x86/lib/Makefile     |   1 +
 arch/x86/lib/memcpy_32.c  | 187 -------------------------------
 arch/x86/lib/memmove_32.S | 226 ++++++++++++++++++++++++++++++++++++++
 lib/memcpy_kunit.c        |  21 ++++
 4 files changed, 248 insertions(+), 187 deletions(-)
 create mode 100644 arch/x86/lib/memmove_32.S

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index f76747862bd2..9a0b8ed782e2 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -60,6 +60,7 @@ ifeq ($(CONFIG_X86_32),y)
         lib-y += checksum_32.o
         lib-y += strstr_32.o
         lib-y += string_32.o
+        lib-y += memmove_32.o
 ifneq ($(CONFIG_X86_CMPXCHG64),y)
         lib-y += cmpxchg8b_emu.o atomic64_386_32.o
 endif
diff --git a/arch/x86/lib/memcpy_32.c b/arch/x86/lib/memcpy_32.c
index ef3af7ff2c8a..a29b64befb93 100644
--- a/arch/x86/lib/memcpy_32.c
+++ b/arch/x86/lib/memcpy_32.c
@@ -17,190 +17,3 @@ __visible void *memset(void *s, int c, size_t count)
 	return __memset(s, c, count);
 }
 EXPORT_SYMBOL(memset);
-
-__visible void *memmove(void *dest, const void *src, size_t n)
-{
-	int d0,d1,d2,d3,d4,d5;
-	char *ret = dest;
-
-	__asm__ __volatile__(
-		/* Handle more 16 bytes in loop */
-		"cmp $0x10, %0\n\t"
-		"jb	1f\n\t"
-
-		/* Decide forward/backward copy mode */
-		"cmp %2, %1\n\t"
-		"jb	2f\n\t"
-
-		/*
-		 * movs instruction have many startup latency
-		 * so we handle small size by general register.
-		 */
-		"cmp  $680, %0\n\t"
-		"jb 3f\n\t"
-		/*
-		 * movs instruction is only good for aligned case.
-		 */
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 4f\n\t"
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes forward in each loop.
-		 */
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov 2*4(%1), %3\n\t"
-		"mov 3*4(%1), %4\n\t"
-		"mov  %3, 2*4(%2)\n\t"
-		"mov  %4, 3*4(%2)\n\t"
-		"lea  0x10(%1), %1\n\t"
-		"lea  0x10(%2), %2\n\t"
-		"jae 3b\n\t"
-		"add $0x10, %0\n\t"
-		"jmp 1f\n\t"
-
-		/*
-		 * Handle data forward by movs.
-		 */
-		".p2align 4\n\t"
-		"4:\n\t"
-		"mov -4(%1, %0), %3\n\t"
-		"lea -4(%2, %0), %4\n\t"
-		"shr $2, %0\n\t"
-		"rep movsl\n\t"
-		"mov %3, (%4)\n\t"
-		"jmp 11f\n\t"
-		/*
-		 * Handle data backward by movs.
-		 */
-		".p2align 4\n\t"
-		"6:\n\t"
-		"mov (%1), %3\n\t"
-		"mov %2, %4\n\t"
-		"lea -4(%1, %0), %1\n\t"
-		"lea -4(%2, %0), %2\n\t"
-		"shr $2, %0\n\t"
-		"std\n\t"
-		"rep movsl\n\t"
-		"mov %3,(%4)\n\t"
-		"cld\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Start to prepare for backward copy.
-		 */
-		".p2align 4\n\t"
-		"2:\n\t"
-		"cmp  $680, %0\n\t"
-		"jb 5f\n\t"
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 6b\n\t"
-
-		/*
-		 * Calculate copy position to tail.
-		 */
-		"5:\n\t"
-		"add %0, %1\n\t"
-		"add %0, %2\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes backward in each loop.
-		 */
-		"7:\n\t"
-		"sub $0x10, %0\n\t"
-
-		"mov -1*4(%1), %3\n\t"
-		"mov -2*4(%1), %4\n\t"
-		"mov  %3, -1*4(%2)\n\t"
-		"mov  %4, -2*4(%2)\n\t"
-		"mov -3*4(%1), %3\n\t"
-		"mov -4*4(%1), %4\n\t"
-		"mov  %3, -3*4(%2)\n\t"
-		"mov  %4, -4*4(%2)\n\t"
-		"lea  -0x10(%1), %1\n\t"
-		"lea  -0x10(%2), %2\n\t"
-		"jae 7b\n\t"
-		/*
-		 * Calculate copy position to head.
-		 */
-		"add $0x10, %0\n\t"
-		"sub %0, %1\n\t"
-		"sub %0, %2\n\t"
-
-		/*
-		 * Move data from 8 bytes to 15 bytes.
-		 */
-		".p2align 4\n\t"
-		"1:\n\t"
-		"cmp $8, %0\n\t"
-		"jb 8f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov -2*4(%1, %0), %5\n\t"
-		"mov -1*4(%1, %0), %1\n\t"
-
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov  %5, -2*4(%2, %0)\n\t"
-		"mov  %1, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 4 bytes to 7 bytes.
-		 */
-		".p2align 4\n\t"
-		"8:\n\t"
-		"cmp $4, %0\n\t"
-		"jb 9f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov -1*4(%1, %0), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 2 bytes to 3 bytes.
-		 */
-		".p2align 4\n\t"
-		"9:\n\t"
-		"cmp $2, %0\n\t"
-		"jb 10f\n\t"
-		"movw 0*2(%1), %%dx\n\t"
-		"movw -1*2(%1, %0), %%bx\n\t"
-		"movw %%dx, 0*2(%2)\n\t"
-		"movw %%bx, -1*2(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data for 1 byte.
-		 */
-		".p2align 4\n\t"
-		"10:\n\t"
-		"cmp $1, %0\n\t"
-		"jb 11f\n\t"
-		"movb (%1), %%cl\n\t"
-		"movb %%cl, (%2)\n\t"
-		".p2align 4\n\t"
-		"11:"
-		: "=&c" (d0), "=&S" (d1), "=&D" (d2),
-		  "=r" (d3),"=r" (d4), "=r"(d5)
-		:"0" (n),
-		 "1" (src),
-		 "2" (dest)
-		:"memory");
-
-	return ret;
-
-}
-EXPORT_SYMBOL(memmove);
diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
new file mode 100644
index 000000000000..35d9dd24624e
--- /dev/null
+++ b/arch/x86/lib/memmove_32.S
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/export.h>
+
+SYM_FUNC_START(memmove)
+/*
+ * void *memmove(void *dest_in, const void *src_in, size_t n)
+ * -mregparm=3 passes these in registers:
+ * dest_in: %eax
+ * src_in: %edx
+ * n: %ecx
+ *
+ * n can remain in %ecx, but for `rep movsl`, we'll need dest in %edi and src
+ * in %esi.
+ */
+.set dest_in, %eax
+.set dest, %edi
+.set src_in, %edx
+.set src, %esi
+.set n, %ecx
+
+/*
+ * Need 3 scratch registers. These need to be saved+restored. Section 3.2.1
+ * Footnote 7 of the System V Application Binary Interface Version 1.0 aka
+ * "psABI" notes:
+ *   Note that in contrast to the Intel386 ABI, %rdi, and %rsi belong to the
+ *   called function, not the caller.
+ * i.e. %edi and %esi are callee saved for i386 (because they belong to the
+ * caller).
+ */
+.set tmp0, %edx
+.set tmp0w, %dx
+.set tmp1, %ebx
+.set tmp1w, %bx
+.set tmp2, %eax
+.set tmp3b, %cl
+
+	pushl	%ebp
+	movl	%esp, %ebp
+
+	pushl	dest_in
+	pushl	dest
+	pushl	src
+	pushl	tmp1
+
+	movl src_in, src
+	movl dest_in, dest
+
+	/* Handle more 16 bytes in loop */
+	cmpl	$0x10, n
+	jb	.L16_byteswap
+
+	/* Decide forward/backward copy mode */
+	cmpl	dest, src
+	jb	.Lbackwards_header
+
+	/*
+	 * movs instruction have many startup latency
+	 * so we handle small size by general register.
+	 */
+	cmpl	$680, n
+	jb	.Ltoo_small_forwards
+	/*
+	 * movs instruction is only good for aligned case.
+	 */
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lforward_movs
+.Ltoo_small_forwards:
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes forward in each loop.
+	 */
+.L16_byteswap_forwards_loop:
+	subl	$0x10, n
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	2*4(src), tmp0
+	movl	3*4(src), tmp1
+	movl	tmp0, 2*4(dest)
+	movl	tmp1, 3*4(dest)
+	leal	0x10(src), src
+	leal	0x10(dest), dest
+	jae	.L16_byteswap_forwards_loop
+	addl	$0x10, n
+	jmp	.L16_byteswap
+
+	/*
+	 * Handle data forward by movs.
+	 */
+.p2align 4
+.Lforward_movs:
+	movl	-4(src, n), tmp0
+	leal	-4(dest, n), tmp1
+	shrl	$2, n
+	rep	movsl
+	movl	tmp0, (tmp1)
+	jmp	.Ldone
+	/*
+	 * Handle data backward by movs.
+	 */
+.p2align 4
+.Lbackwards_movs:
+	movl	(src), tmp0
+	movl	dest, tmp1
+	leal	-4(src, n), src
+	leal	-4(dest, n), dest
+	shrl	$2, n
+	std
+	rep	movsl
+	movl	tmp0,(tmp1)
+	cld
+	jmp	.Ldone
+
+	/*
+	 * Start to prepare for backward copy.
+	 */
+.p2align 4
+.Lbackwards_header:
+	cmpl	$680, n
+	jb	.Ltoo_small_backwards
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lbackwards_movs
+
+	/*
+	 * Calculate copy position to tail.
+	 */
+.Ltoo_small_backwards:
+	addl	n, src
+	addl	n, dest
+	subl	$0x10, n
+
+	/*
+	 * We gobble 16 bytes backward in each loop.
+	 */
+.L16_byteswap_backwards_loop:
+	subl	$0x10, n
+
+	movl	-1*4(src), tmp0
+	movl	-2*4(src), tmp1
+	movl	tmp0, -1*4(dest)
+	movl	tmp1, -2*4(dest)
+	movl	-3*4(src), tmp0
+	movl	-4*4(src), tmp1
+	movl	tmp0, -3*4(dest)
+	movl	tmp1, -4*4(dest)
+	leal	-0x10(src), src
+	leal	-0x10(dest), dest
+	jae	.L16_byteswap_backwards_loop
+	/*
+	 * Calculate copy position to head.
+	 */
+	addl	$0x10, n
+	subl	n, src
+	subl	n, dest
+
+	/*
+	 * Move data from 8 bytes to 15 bytes.
+	 */
+.p2align 4
+.L16_byteswap:
+	cmpl	$8, n
+	jb	.L8_byteswap
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	-2*4(src, n), tmp2
+	movl	-1*4(src, n), src
+
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	tmp2, -2*4(dest, n)
+	movl	src, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 4 bytes to 7 bytes.
+	 */
+.p2align 4
+.L8_byteswap:
+	cmpl	$4, n
+	jb	.L4_byteswap
+	movl	0*4(src), tmp0
+	movl	-1*4(src, n), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, -1*4(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data from 2 bytes to 3 bytes.
+	 */
+.p2align 4
+.L4_byteswap:
+	cmpl	$2, n
+	jb	.Lbyteswap
+	movw	0*2(src), tmp0w
+	movw	-1*2(src, n), tmp1w
+	movw	tmp0w, 0*2(dest)
+	movw	tmp1w, -1*2(dest, n)
+	jmp	.Ldone
+
+	/*
+	 * Move data for 1 byte.
+	 */
+.p2align 4
+.Lbyteswap:
+	cmpl	$1, n
+	jb	.Ldone
+	movb	(src), tmp3b
+	movb	tmp3b, (dest)
+.p2align 4
+.Ldone:
+	popl	tmp1
+	popl	src
+	popl	dest
+	popl	%eax // memmove returns dest_in
+	popl	%ebp
+	RET
+SYM_FUNC_END(memmove)
+EXPORT_SYMBOL(memmove)
diff --git a/lib/memcpy_kunit.c b/lib/memcpy_kunit.c
index 62f8ffcbbaa3..52bce4f697a5 100644
--- a/lib/memcpy_kunit.c
+++ b/lib/memcpy_kunit.c
@@ -107,6 +107,8 @@ static void memcpy_test(struct kunit *test)
 #undef TEST_OP
 }
 
+static unsigned char larger_array [2048];
+
 static void memmove_test(struct kunit *test)
 {
 #define TEST_OP "memmove"
@@ -181,6 +183,25 @@ static void memmove_test(struct kunit *test)
 	ptr = &overlap.data[2];
 	memmove(ptr, overlap.data, 5);
 	compare("overlapping write", overlap, overlap_expected);
+
+	/* Verify larger overlapping moves. */
+	larger_array[256] = 0xAAu;
+	/* Test a backwards overlapping memmove first. 256 and 1024 are
+	 * important for i386 to use rep movsl.
+	 */
+	memmove(larger_array, larger_array + 256, 1024);
+	KUNIT_ASSERT_EQ(test, larger_array[0], 0xAAu);
+	KUNIT_ASSERT_EQ(test, larger_array[256], 0x00);
+	KUNIT_ASSERT_NULL(test,
+		memchr(larger_array + 1, 0xaa, ARRAY_SIZE(larger_array) - 1));
+	/* Test a forwards overlapping memmove. */
+	larger_array[0] = 0xBBu;
+	memmove(larger_array + 256, larger_array, 1024);
+	KUNIT_ASSERT_EQ(test, larger_array[0], 0xBBu);
+	KUNIT_ASSERT_EQ(test, larger_array[256], 0xBBu);
+	KUNIT_ASSERT_NULL(test, memchr(larger_array + 1, 0xBBu, 256 - 1));
+	KUNIT_ASSERT_NULL(test,
+		memchr(larger_array + 257, 0xBBu, ARRAY_SIZE(larger_array) - 257));
 #undef TEST_OP
 }
 
-- 
2.37.3.998.g577e59143f-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
@ 2022-09-28 22:03                           ` Kees Cook
  2022-09-29  7:01                           ` Ingo Molnar
                                             ` (3 subsequent siblings)
  4 siblings, 0 replies; 26+ messages in thread
From: Kees Cook @ 2022-09-28 22:03 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, linux-kernel, Linus Torvalds,
	llvm, Andy Lutomirski, Rasmus Villemoes

On Wed, Sep 28, 2022 at 02:05:12PM -0700, Nick Desaulniers wrote:
> When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
> (depending on additional configs which I have not been able to isolate)
> to observe a failure during register allocation:
> 
>   error: inline assembly requires more registers than available
> 
> when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().
> 
> memmove is quite large and probably shouldn't be inlined due to size
> alone. A noinline function attribute would be the simplest fix, but
> there's a few things that stand out with the current definition:
> 
> In addition to having complex constraints that can't always be resolved,
> the clobber list seems to be missing %bx and %dx, and possibly %cl. By
> using numbered operands rather than symbolic operands, the constraints
> are quite obnoxious to refactor.
> 
> Having a large function be 99% inline asm is a code smell that this
> function should simply be written in stand-alone out-of-line assembler.
> That gives the opportunity for other cleanups like fixing the
> inconsistent use of tabs vs spaces and instruction suffixes, and the
> label 3 appearing twice.  Symbolic operands and local labels would
> provide this code with a fresh coat of paint.
> 
> Moving this to out of line assembler guarantees that the
> compiler cannot inline calls to memmove.
> 
> This has been done previously for 64b:
> commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
> and fix return value bug")
> 
> Also, add a test that tickles the `rep movsl` implementation to test it
> for correctness, since it has implicit operands.

Yeah, thanks for poking this in particular. I was bothered that the
side-effect test caught a corner case and was planning to expand the
memcpy tests even more; thank you for doing that! I've got some more
coming and can confirm they tickled the same bug.

> Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>

This time I've looked at the binary differences between the functions
generated by both GCC[1] and Clang[2]. GCC is a little more difficult to
compare, since it does some register swaps, but the Clang output is same
excepting the order of push/pop, and different nops.
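
(One way to produce that kind of before/after comparison -- a sketch,
not necessarily the exact workflow behind [1] and [2]:)

```
$ objdump -dr arch/x86/lib/memcpy_32.o  > memmove-before.txt  # old inline-asm build
$ objdump -dr arch/x86/lib/memmove_32.o > memmove-after.txt   # with this patch applied
$ # then diff the <memmove> disassembly between the two dumps
```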

Reviewed-by: Kees Cook <keescook@chromium.org>

Nick's tests pass, and my newly written tests also pass; I'll send those
as a follow-up.

Tested-by: Kees Cook <keescook@chromium.org>

-Kees

[1] https://paste.debian.net/hidden/b6298e62/
[2] https://paste.debian.net/hidden/d8343143/

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
  2022-09-28 22:03                           ` Kees Cook
@ 2022-09-29  7:01                           ` Ingo Molnar
  2022-09-29  8:02                           ` Ingo Molnar
                                             ` (2 subsequent siblings)
  4 siblings, 0 replies; 26+ messages in thread
From: Ingo Molnar @ 2022-09-29  7:01 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes


* Nick Desaulniers <ndesaulniers@google.com> wrote:

> +	/* Test a backwards overlapping memmove first. 256 and 1024 are
> +	 * important for i386 to use rep movsl.
> +	 */

Nit, please use the customary (multi-line) comment style:

  /*
   * Comment .....
   * ...... goes here.
   */

specified in Documentation/CodingStyle.

Thanks,

        Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
  2022-09-28 22:03                           ` Kees Cook
  2022-09-29  7:01                           ` Ingo Molnar
@ 2022-09-29  8:02                           ` Ingo Molnar
  2022-09-29 17:26                             ` Nick Desaulniers
  2022-09-30  9:55                           ` David Laight
  2022-09-30 10:14                           ` [PATCH v4] " David Laight
  4 siblings, 1 reply; 26+ messages in thread
From: Ingo Molnar @ 2022-09-29  8:02 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes


* Nick Desaulniers <ndesaulniers@google.com> wrote:

> +SYM_FUNC_START(memmove)
> +/*
> + * void *memmove(void *dest_in, const void *src_in, size_t n)
> + * -mregparm=3 passes these in registers:
> + * dest_in: %eax
> + * src_in: %edx
> + * n: %ecx
> + *
> + * n can remain in %ecx, but for `rep movsl`, we'll need dest in %edi and src
> + * in %esi.
> + */
> +.set dest_in, %eax
> +.set dest, %edi
> +.set src_in, %edx
> +.set src, %esi
> +.set n, %ecx
> +
> +/*
> + * Need 3 scratch registers. These need to be saved+restored. Section 3.2.1
> + * Footnote 7 of the System V Application Binary Interface Version 1.0 aka
> + * "psABI" notes:
> + *   Note that in contrast to the Intel386 ABI, %rdi, and %rsi belong to the
> + *   called function, not the caller.
> + * i.e. %edi and %esi are callee saved for i386 (because they belong to the
> + * caller).
> + */
> +.set tmp0, %edx
> +.set tmp0w, %dx
> +.set tmp1, %ebx
> +.set tmp1w, %bx
> +.set tmp2, %eax
> +.set tmp3b, %cl
> +
> +	pushl	%ebp
> +	movl	%esp, %ebp
> +
> +	pushl	dest_in
> +	pushl	dest
> +	pushl	src
> +	pushl	tmp1

Yeah, so you did various whitespace & indentation cleanups, and I think if 
we are touching trivialities we might as well fix/improve the documentation 
of this function too...

For example the comments around parameters and register clobbering are 
somewhat inaccurate and actively obfuscate what is going on.

1)

Firstly, the function uses not "3 scratch registers", but four:

   eax [tmp2]
   ebx [tmp1]
   ecx [tmp3]
   edx [tmp0]

[ Confusion probably comes from the fact that the main logic uses 3 of 
  these registers to move stuff around: tmp0/1/2, and tmp3 is clobbered as 
  part of the 'byteswap' branch. ]

2)

The description of the calling convention is needlessly obfuscated with 
calling standards details. If we want to mention it to make it clear what 
we are saving on the stack and what not, the best description is the one 
from calling.h:

   x86 function calling convention, 32-bit:
   ----------------------------------------
    arguments         | callee-saved        | extra caller-saved | return
   [callee-clobbered] |                     | [callee-clobbered] |
   -------------------------------------------------------------------------
   eax edx ecx        | ebx edi esi ebp [*] | <none>             | eax

This makes it clear that of the 4 temporary scratch registers used by 
memmove(), only ebx [tmp1] needs to be saved explicitly.

Beyond the (content-)scratch registers, the function will also internally 
clobber three other registers:

   esi [src]
   edi [dest]
   ebp [frame pointer]

These esi/edi are the indices into the memory regions.

Since esi/edi are callee-saved, these need to be saved/restored too.

This fully explains the prologue - with annotations in the comments added 
by me:

+       pushl   %ebp                // save callee-saved ebp
+       movl    %esp, %ebp          // set standard frame pointer

+       pushl   dest_in             // 'dest_in' will be the return value
+       pushl   dest                // save callee-saved edi
+       pushl   src                 // save callee-saved esi
+       pushl   tmp1                // save callee-saved ebx

...

+       popl    tmp1                // restore callee-saved ebx
+       popl    src                 // restore callee-saved esi
+       popl    dest                // restore callee-saved edi
+       popl    %eax                // memmove returns 'dest_in'

+       popl    %ebp                // restore callee-saved ebp
+       RET

3)

But since this large function clobbers *all* callee-saved general purpose 
registers of the i386 kernel function call ABI, we might as well make that 
explicit, via something like:

        /*
         * Save all callee-saved registers, because this function is
         * going to clobber all of them:
         */
        pushl   %ebp
        movl    %esp, %ebp          // set standard frame pointer
        pushl   %ebx
        pushl   %edi
        pushl   %esi

        pushl   dest_in             // save 'dest_in' parameter [eax] as the return value

        ...

        popl    dest_in             // restore 'dest_in' [eax] as the return value

        /* Restore all callee-saved registers: */
        popl    %esi
        popl    %edi
        popl    %ebx
        popl    %ebp

        RET

This IMO makes it a lot more clear what is going on in the 
prologue/epilogue and why.

Feel free to carry these changes over into your patch.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-29  8:02                           ` Ingo Molnar
@ 2022-09-29 17:26                             ` Nick Desaulniers
  0 siblings, 0 replies; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-29 17:26 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes

On Thu, Sep 29, 2022 at 1:02 AM Ingo Molnar <mingo@kernel.org> wrote:
>
> Yeah, so you did various whitespace & indentation cleanups, and I think if
> we are touching trivialities we might as well fix/improve the documentation
> of this function too...

Yes, these are all wonderful suggestions.  My hope is to also improve
the readability of the implementation. I will incorporate your
suggestions into a v5 and credit you with a suggested-by tag.

> calling standards details. If we want to mention it to make it clear what
> we are saving on the stack and what not, the best description is the one
> from calling.h:
>
>    x86 function calling convention, 32-bit:
>    ----------------------------------------
>     arguments         | callee-saved        | extra caller-saved | return
>    [callee-clobbered] |                     | [callee-clobbered] |
>    -------------------------------------------------------------------------
>    eax edx ecx        | ebx edi esi ebp [*] | <none>             | eax

Oh! Perfect! I'm so glad this table exists!

> Feel free to carry these changes over into your patch.

Will do, thanks.
-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
                                             ` (2 preceding siblings ...)
  2022-09-29  8:02                           ` Ingo Molnar
@ 2022-09-30  9:55                           ` David Laight
  2022-09-30 16:43                             ` Nick Desaulniers
  2022-09-30 10:14                           ` [PATCH v4] " David Laight
  4 siblings, 1 reply; 26+ messages in thread
From: David Laight @ 2022-09-30  9:55 UTC (permalink / raw)
  To: 'Nick Desaulniers',
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes

From: Nick Desaulniers
> Sent: 28 September 2022 22:05
...
> memmove is quite large and probably shouldn't be inlined due to size
> alone
..
> +	/* Decide forward/backward copy mode */
> +	cmpl	dest, src
> +	jb	.Lbackwards_header

It has to be better to do the slightly more complicated
test 'dest - src < size' (as unsigned) so that reverse
copies are only done when absolutely necessary.
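
In C terms, the test would be roughly (a sketch; copy_forwards() and
copy_backwards() are just illustrative names):

```
static void memmove_direction(void *dest, const void *src, unsigned long n)
{
	/* A forward copy is only unsafe when dest lands inside [src, src + n).
	 * With unsigned arithmetic that condition is exactly (dest - src) < n. */
	if ((unsigned long)dest - (unsigned long)src < n)
		copy_backwards(dest, src, n);	/* truly overlapping */
	else
		copy_forwards(dest, src, n);	/* dest < src, or disjoint */
}
```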

Ignoring pathological cases on some cpu families the
forwards copy will benefit from hardware cache prefetch.

I also have a vague recollection of std and cld being slow.

Oh - and why do all the labels have 'byteswap' in them?

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
                                             ` (3 preceding siblings ...)
  2022-09-30  9:55                           ` David Laight
@ 2022-09-30 10:14                           ` David Laight
  4 siblings, 0 replies; 26+ messages in thread
From: David Laight @ 2022-09-30 10:14 UTC (permalink / raw)
  To: 'Nick Desaulniers',
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes

From: Nick Desaulniers
> Sent: 28 September 2022 22:05
...

Reading it again, what is this test supposed to achieve?

> +	/*
> +	 * movs instruction is only good for aligned case.
> +	 */
> +	movl	src, tmp0
> +	xorl	dest, tmp0
> +	andl	$0xff, tmp0
> +	jz	.Lforward_movs

The 'aligned' test would be '(src | dest) & 3'.
(Or maybe '& 7' since some 32-bit x86 CPUs actually
do 8-byte aligned 'rep movsl' faster than 4-byte
aligned ones.)
OTOH the code loop is likely to be slower still.
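
In C terms, the two checks differ like this (a sketch):

```
/* the alignment test described above: both pointers 4-byte aligned */
static int both_4byte_aligned(unsigned long src, unsigned long dest)
{
	return ((src | dest) & 3) == 0;
}

/* what the existing code actually tests: same address modulo 256 */
static int same_addr_mod_256(unsigned long src, unsigned long dest)
{
	return ((src ^ dest) & 0xff) == 0;
}
```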

I've not tried measuring misaligned 'rep movsw' but
on some recent Intel CPUs normal misaligned reads cost
almost nothing - even when doing two reads/clock.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-30  9:55                           ` David Laight
@ 2022-09-30 16:43                             ` Nick Desaulniers
  2022-09-30 16:46                               ` Linus Torvalds
  0 siblings, 1 reply; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-30 16:43 UTC (permalink / raw)
  To: David Laight
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86,
	H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes

On Fri, Sep 30, 2022 at 2:55 AM David Laight <David.Laight@aculab.com> wrote:
>
> Oh - and why do all the labels have 'byteswap' in them?

.Lbyteswap is swapping single bytes at a time.
.L4_byteswap is swapping 4 bytes at a time.
.L8_byteswap is swapping 8 bytes at a time.
.L16_byteswap, .L16_byteswap_backwards_loop, and
.L16_byteswap_forwards_loop are swapping 16 bytes at a time.

When doing a move that has to be delicate with respect to overlap, I
thought it was helpful to know how many bytes are being swapped in a
given run of instructions. The names were chosen based on the
comparison/guards/conditional jmps.

---

Linus pointed out that this 32b memmove looks similar to 64b
memcpy_orig.  Would you prefer me use those labels?

s/.Lbyteswap/.Lstore_1byte/
s/.L4_byteswap/.Lless_3bytes/
s/.L8_byteswap/.Lless_8bytes/
s/.L16_byteswap/.Lless_16bytes/
s/.L16_byteswap_backwards_loop/.Lcopy_backward_loop/
s/.L16_byteswap_forwards_loop/.Lcopy_forward_loop/

or something else?
-- 
Thanks,
~Nick Desaulniers

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] x86, mem: move memmove to out of line assembler
  2022-09-30 16:43                             ` Nick Desaulniers
@ 2022-09-30 16:46                               ` Linus Torvalds
  2022-09-30 18:55                                 ` [PATCH v5] " Nick Desaulniers
  0 siblings, 1 reply; 26+ messages in thread
From: Linus Torvalds @ 2022-09-30 16:46 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: David Laight, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, x86, H . Peter Anvin, Peter Zijlstra, Kees Cook,
	linux-kernel, llvm, Andy Lutomirski, Rasmus Villemoes

On Fri, Sep 30, 2022 at 9:43 AM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> On Fri, Sep 30, 2022 at 2:55 AM David Laight <David.Laight@aculab.com> wrote:
> >
> > Oh - and why do all the labels have 'byteswap' in them?
>
> .Lbyteswap is swapping single bytes at a time.
> .L4_byteswap is swapping 4 bytes at a time.
> .L8_byteswap is swapping 8 bytes at a time.
> .L16_byteswap, .L16_byteswap_backwards_loop, and
> .L16_byteswap_forwards_loop are swapping 16 bytes at a time.

I think the objection here is that there is no "swap".

A "byte swap" in particular is generally a byte order operation (ie
swapping bytes within one word). And "swap" in general is about
switching the value of two things.

Here, the "byteswap" code sequences just move data in one direction.
No "swap" anywhere that I can see.

             Linus

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v5] x86, mem: move memmove to out of line assembler
  2022-09-30 16:46                               ` Linus Torvalds
@ 2022-09-30 18:55                                 ` Nick Desaulniers
  0 siblings, 0 replies; 26+ messages in thread
From: Nick Desaulniers @ 2022-09-30 18:55 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
  Cc: x86, H . Peter Anvin, Peter Zijlstra, Kees Cook, linux-kernel,
	Linus Torvalds, llvm, Andy Lutomirski, Rasmus Villemoes,
	Nick Desaulniers, Ingo Molnar, David Laight

When building ARCH=i386 with CONFIG_LTO_CLANG_FULL=y, it's possible
(depending on additional configs which I have not been able to isolate)
to observe a failure during register allocation:

  error: inline assembly requires more registers than available

when memmove is inlined into tcp_v4_fill_cb() or tcp_v6_fill_cb().

memmove is quite large and probably shouldn't be inlined due to size
alone. A noinline function attribute would be the simplest fix, but
there's a few things that stand out with the current definition:

In addition to having complex constraints that can't always be resolved,
the clobber list seems to be missing %bx. By using numbered operands
rather than symbolic operands, the constraints are quite obnoxious to
refactor.

Having a large function be 99% inline asm is a code smell that this
function should simply be written in stand-alone out-of-line assembler.

Moving this to out of line assembler guarantees that the
compiler cannot inline calls to memmove.

This has been done previously for 64b:
commit 9599ec0471de ("x86-64, mem: Convert memmove() to assembly file
and fix return value bug")

That gives the opportunity for other cleanups like fixing the
inconsistent use of tabs vs spaces and instruction suffixes, and the
label 3 appearing twice.  Symbolic operands, local labels, and
additional comments would provide this code with a fresh coat of paint.

Finally, add a test that tickles the `rep movsl` implementation to
verify its correctness, since it has implicit operands.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Suggested-by: David Laight <David.Laight@aculab.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
---
Changes v4 -> v5:
* Reword+reorder commit message, in particular drop the "maybe" language
  around %dx and %cl clobber, now that I know of arch/x86/entry/calling.h.
* Drop most of my previous comments about caller vs. callee saved, as
  per Ingo, and use Ingo's comments verbatim.
* Reference arch/x86/entry/calling.h in comment.
* Reorder pushl/popl. NFC
* Rename labels as per David. 16_byteswap -> move_16B
* Use /* comments */ more consistently, as per Ingo.
* Add Ingo's+David's SB tags.
* Carry forward Kees' RB/TB tags.

Changes v3 -> v4:
* Fix bug I introduced in v1 when I changed src and dest to use
  different scratch registers, which breaks `rep movsl` as pointed out
  by Rasmus. This requires 2 `movl`s earlier on, changing the tmp
  registers, then adjusting which registers we save/restore by the
  calling convention. I intentionally did not carry forward tags from
  Kees from v3 due to this bug.
* Add a Kunit test that hangs on v3, but passes on v4. It uses a few
  magic constants as well in order to test the `rep movsl` paths.

Changes v2 -> v3:
* Fix a bug I introduced in v1 when I changed one of the temp register
  operands for one of the two instructions performing a mem-to-mem swap,
  but not the other instruction's operand; discovered by Kees.
  Verified KUnit memcpy tests are passing via:
  $ ./tools/testing/kunit/kunit.py run --arch=i386 --make_options LLVM=1
  $ ./tools/testing/kunit/kunit.py run --arch=i386
  Fixed by using symbolic identifiers rather than open coded registers
  for the less-than-word-size temporary registers.
* Expand the comment about callee saved registers on i386 with a
  reference to the psABI.

Changes v1 -> v2:
* Add reference to 9599ec0471de in commit message.
* Include asm/export.h then make sure to EXPORT_SYMBOL(memmove).

 arch/x86/lib/Makefile     |   1 +
 arch/x86/lib/memcpy_32.c  | 187 -----------------------------------
 arch/x86/lib/memmove_32.S | 200 ++++++++++++++++++++++++++++++++++++++
 lib/memcpy_kunit.c        |  22 +++++
 4 files changed, 223 insertions(+), 187 deletions(-)
 create mode 100644 arch/x86/lib/memmove_32.S

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index f76747862bd2..9a0b8ed782e2 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -60,6 +60,7 @@ ifeq ($(CONFIG_X86_32),y)
         lib-y += checksum_32.o
         lib-y += strstr_32.o
         lib-y += string_32.o
+        lib-y += memmove_32.o
 ifneq ($(CONFIG_X86_CMPXCHG64),y)
         lib-y += cmpxchg8b_emu.o atomic64_386_32.o
 endif
diff --git a/arch/x86/lib/memcpy_32.c b/arch/x86/lib/memcpy_32.c
index ef3af7ff2c8a..a29b64befb93 100644
--- a/arch/x86/lib/memcpy_32.c
+++ b/arch/x86/lib/memcpy_32.c
@@ -17,190 +17,3 @@ __visible void *memset(void *s, int c, size_t count)
 	return __memset(s, c, count);
 }
 EXPORT_SYMBOL(memset);
-
-__visible void *memmove(void *dest, const void *src, size_t n)
-{
-	int d0,d1,d2,d3,d4,d5;
-	char *ret = dest;
-
-	__asm__ __volatile__(
-		/* Handle more 16 bytes in loop */
-		"cmp $0x10, %0\n\t"
-		"jb	1f\n\t"
-
-		/* Decide forward/backward copy mode */
-		"cmp %2, %1\n\t"
-		"jb	2f\n\t"
-
-		/*
-		 * movs instruction have many startup latency
-		 * so we handle small size by general register.
-		 */
-		"cmp  $680, %0\n\t"
-		"jb 3f\n\t"
-		/*
-		 * movs instruction is only good for aligned case.
-		 */
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 4f\n\t"
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes forward in each loop.
-		 */
-		"3:\n\t"
-		"sub $0x10, %0\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov 2*4(%1), %3\n\t"
-		"mov 3*4(%1), %4\n\t"
-		"mov  %3, 2*4(%2)\n\t"
-		"mov  %4, 3*4(%2)\n\t"
-		"lea  0x10(%1), %1\n\t"
-		"lea  0x10(%2), %2\n\t"
-		"jae 3b\n\t"
-		"add $0x10, %0\n\t"
-		"jmp 1f\n\t"
-
-		/*
-		 * Handle data forward by movs.
-		 */
-		".p2align 4\n\t"
-		"4:\n\t"
-		"mov -4(%1, %0), %3\n\t"
-		"lea -4(%2, %0), %4\n\t"
-		"shr $2, %0\n\t"
-		"rep movsl\n\t"
-		"mov %3, (%4)\n\t"
-		"jmp 11f\n\t"
-		/*
-		 * Handle data backward by movs.
-		 */
-		".p2align 4\n\t"
-		"6:\n\t"
-		"mov (%1), %3\n\t"
-		"mov %2, %4\n\t"
-		"lea -4(%1, %0), %1\n\t"
-		"lea -4(%2, %0), %2\n\t"
-		"shr $2, %0\n\t"
-		"std\n\t"
-		"rep movsl\n\t"
-		"mov %3,(%4)\n\t"
-		"cld\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Start to prepare for backward copy.
-		 */
-		".p2align 4\n\t"
-		"2:\n\t"
-		"cmp  $680, %0\n\t"
-		"jb 5f\n\t"
-		"mov %1, %3\n\t"
-		"xor %2, %3\n\t"
-		"and $0xff, %3\n\t"
-		"jz 6b\n\t"
-
-		/*
-		 * Calculate copy position to tail.
-		 */
-		"5:\n\t"
-		"add %0, %1\n\t"
-		"add %0, %2\n\t"
-		"sub $0x10, %0\n\t"
-
-		/*
-		 * We gobble 16 bytes backward in each loop.
-		 */
-		"7:\n\t"
-		"sub $0x10, %0\n\t"
-
-		"mov -1*4(%1), %3\n\t"
-		"mov -2*4(%1), %4\n\t"
-		"mov  %3, -1*4(%2)\n\t"
-		"mov  %4, -2*4(%2)\n\t"
-		"mov -3*4(%1), %3\n\t"
-		"mov -4*4(%1), %4\n\t"
-		"mov  %3, -3*4(%2)\n\t"
-		"mov  %4, -4*4(%2)\n\t"
-		"lea  -0x10(%1), %1\n\t"
-		"lea  -0x10(%2), %2\n\t"
-		"jae 7b\n\t"
-		/*
-		 * Calculate copy position to head.
-		 */
-		"add $0x10, %0\n\t"
-		"sub %0, %1\n\t"
-		"sub %0, %2\n\t"
-
-		/*
-		 * Move data from 8 bytes to 15 bytes.
-		 */
-		".p2align 4\n\t"
-		"1:\n\t"
-		"cmp $8, %0\n\t"
-		"jb 8f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov 1*4(%1), %4\n\t"
-		"mov -2*4(%1, %0), %5\n\t"
-		"mov -1*4(%1, %0), %1\n\t"
-
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, 1*4(%2)\n\t"
-		"mov  %5, -2*4(%2, %0)\n\t"
-		"mov  %1, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 4 bytes to 7 bytes.
-		 */
-		".p2align 4\n\t"
-		"8:\n\t"
-		"cmp $4, %0\n\t"
-		"jb 9f\n\t"
-		"mov 0*4(%1), %3\n\t"
-		"mov -1*4(%1, %0), %4\n\t"
-		"mov  %3, 0*4(%2)\n\t"
-		"mov  %4, -1*4(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data from 2 bytes to 3 bytes.
-		 */
-		".p2align 4\n\t"
-		"9:\n\t"
-		"cmp $2, %0\n\t"
-		"jb 10f\n\t"
-		"movw 0*2(%1), %%dx\n\t"
-		"movw -1*2(%1, %0), %%bx\n\t"
-		"movw %%dx, 0*2(%2)\n\t"
-		"movw %%bx, -1*2(%2, %0)\n\t"
-		"jmp 11f\n\t"
-
-		/*
-		 * Move data for 1 byte.
-		 */
-		".p2align 4\n\t"
-		"10:\n\t"
-		"cmp $1, %0\n\t"
-		"jb 11f\n\t"
-		"movb (%1), %%cl\n\t"
-		"movb %%cl, (%2)\n\t"
-		".p2align 4\n\t"
-		"11:"
-		: "=&c" (d0), "=&S" (d1), "=&D" (d2),
-		  "=r" (d3),"=r" (d4), "=r"(d5)
-		:"0" (n),
-		 "1" (src),
-		 "2" (dest)
-		:"memory");
-
-	return ret;
-
-}
-EXPORT_SYMBOL(memmove);
diff --git a/arch/x86/lib/memmove_32.S b/arch/x86/lib/memmove_32.S
new file mode 100644
index 000000000000..0588b2c0fc95
--- /dev/null
+++ b/arch/x86/lib/memmove_32.S
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/linkage.h>
+#include <asm/export.h>
+
+SYM_FUNC_START(memmove)
+/*
+ * void *memmove(void *dest_in, const void *src_in, size_t n)
+ * -mregparm=3 passes these in registers:
+ * dest_in: %eax
+ * src_in: %edx
+ * n: %ecx
+ * See also: arch/x86/entry/calling.h for description of the calling convention.
+ *
+ * n can remain in %ecx, but for `rep movsl`, we'll need dest in %edi and src
+ * in %esi.
+ */
+.set dest_in, %eax
+.set dest, %edi
+.set src_in, %edx
+.set src, %esi
+.set n, %ecx
+.set tmp0, %edx
+.set tmp0w, %dx
+.set tmp1, %ebx
+.set tmp1w, %bx
+.set tmp2, %eax
+.set tmp3b, %cl
+
+/*
+ * Save all callee-saved registers, because this function is going to clobber
+ * all of them:
+ */
+	pushl	%ebp
+	movl	%esp, %ebp	// set standard frame pointer
+
+	pushl	%ebx
+	pushl	%edi
+	pushl	%esi
+	pushl	%eax		// save 'dest_in' parameter [eax] as the return value
+
+	movl src_in, src
+	movl dest_in, dest
+
+	/* Handle more 16 bytes in loop */
+	cmpl	$0x10, n
+	jb	.Lmove_16B
+
+	/* Decide forward/backward copy mode */
+	cmpl	dest, src
+	jb	.Lbackwards_header
+
+	/*
+	 * movs instruction have many startup latency
+	 * so we handle small size by general register.
+	 */
+	cmpl	$680, n
+	jb	.Ltoo_small_forwards
+	/* movs instruction is only good for aligned case. */
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lforward_movs
+.Ltoo_small_forwards:
+	subl	$0x10, n
+
+	/* We gobble 16 bytes forward in each loop. */
+.Lmove_16B_forwards_loop:
+	subl	$0x10, n
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	2*4(src), tmp0
+	movl	3*4(src), tmp1
+	movl	tmp0, 2*4(dest)
+	movl	tmp1, 3*4(dest)
+	leal	0x10(src), src
+	leal	0x10(dest), dest
+	jae	.Lmove_16B_forwards_loop
+	addl	$0x10, n
+	jmp	.Lmove_16B
+
+	/* Handle data forward by movs. */
+.p2align 4
+.Lforward_movs:
+	movl	-4(src, n), tmp0
+	leal	-4(dest, n), tmp1
+	shrl	$2, n
+	rep	movsl
+	movl	tmp0, (tmp1)
+	jmp	.Ldone
+
+	/* Handle data backward by movs. */
+.p2align 4
+.Lbackwards_movs:
+	movl	(src), tmp0
+	movl	dest, tmp1
+	leal	-4(src, n), src
+	leal	-4(dest, n), dest
+	shrl	$2, n
+	std
+	rep	movsl
+	movl	tmp0,(tmp1)
+	cld
+	jmp	.Ldone
+
+	/* Start to prepare for backward copy. */
+.p2align 4
+.Lbackwards_header:
+	cmpl	$680, n
+	jb	.Ltoo_small_backwards
+	movl	src, tmp0
+	xorl	dest, tmp0
+	andl	$0xff, tmp0
+	jz	.Lbackwards_movs
+
+	/* Calculate copy position to tail. */
+.Ltoo_small_backwards:
+	addl	n, src
+	addl	n, dest
+	subl	$0x10, n
+
+	/* We gobble 16 bytes backward in each loop. */
+.Lmove_16B_backwards_loop:
+	subl	$0x10, n
+
+	movl	-1*4(src), tmp0
+	movl	-2*4(src), tmp1
+	movl	tmp0, -1*4(dest)
+	movl	tmp1, -2*4(dest)
+	movl	-3*4(src), tmp0
+	movl	-4*4(src), tmp1
+	movl	tmp0, -3*4(dest)
+	movl	tmp1, -4*4(dest)
+	leal	-0x10(src), src
+	leal	-0x10(dest), dest
+	jae	.Lmove_16B_backwards_loop
+	/* Calculate copy position to head. */
+	addl	$0x10, n
+	subl	n, src
+	subl	n, dest
+
+	/* Move data from 8 bytes to 15 bytes. */
+.p2align 4
+.Lmove_16B:
+	cmpl	$8, n
+	jb	.Lmove_8B
+	movl	0*4(src), tmp0
+	movl	1*4(src), tmp1
+	movl	-2*4(src, n), tmp2
+	movl	-1*4(src, n), src
+
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, 1*4(dest)
+	movl	tmp2, -2*4(dest, n)
+	movl	src, -1*4(dest, n)
+	jmp	.Ldone
+
+	/* Move data from 4 bytes to 7 bytes. */
+.p2align 4
+.Lmove_8B:
+	cmpl	$4, n
+	jb	.Lmove_4B
+	movl	0*4(src), tmp0
+	movl	-1*4(src, n), tmp1
+	movl	tmp0, 0*4(dest)
+	movl	tmp1, -1*4(dest, n)
+	jmp	.Ldone
+
+	/* Move data from 2 bytes to 3 bytes. */
+.p2align 4
+.Lmove_4B:
+	cmpl	$2, n
+	jb	.Lmove_1B
+	movw	0*2(src), tmp0w
+	movw	-1*2(src, n), tmp1w
+	movw	tmp0w, 0*2(dest)
+	movw	tmp1w, -1*2(dest, n)
+	jmp	.Ldone
+
+	/* Move data for 1 byte. */
+.p2align 4
+.Lmove_1B:
+	cmpl	$1, n
+	jb	.Ldone
+	movb	(src), tmp3b
+	movb	tmp3b, (dest)
+.p2align 4
+.Ldone:
+	popl	dest_in	// restore 'dest_in' [eax] as the return value
+	/* Restore all callee-saved registers: */
+	popl	%esi
+	popl	%edi
+	popl	%ebx
+	popl	%ebp
+
+	RET
+SYM_FUNC_END(memmove)
+EXPORT_SYMBOL(memmove)
diff --git a/lib/memcpy_kunit.c b/lib/memcpy_kunit.c
index 62f8ffcbbaa3..fc3a5be1d10b 100644
--- a/lib/memcpy_kunit.c
+++ b/lib/memcpy_kunit.c
@@ -107,6 +107,8 @@ static void memcpy_test(struct kunit *test)
 #undef TEST_OP
 }
 
+static unsigned char larger_array [2048];
+
 static void memmove_test(struct kunit *test)
 {
 #define TEST_OP "memmove"
@@ -181,6 +183,26 @@ static void memmove_test(struct kunit *test)
 	ptr = &overlap.data[2];
 	memmove(ptr, overlap.data, 5);
 	compare("overlapping write", overlap, overlap_expected);
+
+	/* Verify larger overlapping moves. */
+	larger_array[256] = 0xAAu;
+	/*
+	 * Test a backwards overlapping memmove first. 256 and 1024 are
+	 * important for i386 to use rep movsl.
+	 */
+	memmove(larger_array, larger_array + 256, 1024);
+	KUNIT_ASSERT_EQ(test, larger_array[0], 0xAAu);
+	KUNIT_ASSERT_EQ(test, larger_array[256], 0x00);
+	KUNIT_ASSERT_NULL(test,
+		memchr(larger_array + 1, 0xaa, ARRAY_SIZE(larger_array) - 1));
+	/* Test a forwards overlapping memmove. */
+	larger_array[0] = 0xBBu;
+	memmove(larger_array + 256, larger_array, 1024);
+	KUNIT_ASSERT_EQ(test, larger_array[0], 0xBBu);
+	KUNIT_ASSERT_EQ(test, larger_array[256], 0xBBu);
+	KUNIT_ASSERT_NULL(test, memchr(larger_array + 1, 0xBBu, 256 - 1));
+	KUNIT_ASSERT_NULL(test,
+		memchr(larger_array + 257, 0xBBu, ARRAY_SIZE(larger_array) - 257));
 #undef TEST_OP
 }
 
-- 
2.38.0.rc1.362.ged0d419d3c-goog


^ permalink raw reply related	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2022-09-30 18:55 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-23 17:02 [PATCH] x86, mem: move memmove to out of line assembler Nick Desaulniers
2022-09-23 17:29 ` Linus Torvalds
2022-09-23 17:55   ` Nick Desaulniers
2022-09-23 18:05     ` Linus Torvalds
2022-09-27 17:03       ` Nick Desaulniers
2022-09-27 17:28         ` [PATCH v2] " Nick Desaulniers
2022-09-27 18:41           ` Kees Cook
2022-09-27 19:23           ` Kees Cook
2022-09-27 20:01             ` Nick Desaulniers
2022-09-27 20:36               ` Kees Cook
2022-09-27 21:02                 ` [PATCH v3] " Nick Desaulniers
2022-09-27 21:14                   ` Kees Cook
2022-09-28  7:24                   ` Rasmus Villemoes
2022-09-28 19:00                     ` Linus Torvalds
2022-09-28 19:06                     ` Nick Desaulniers
2022-09-28 20:49                       ` Nick Desaulniers
2022-09-28 21:05                         ` [PATCH v4] " Nick Desaulniers
2022-09-28 22:03                           ` Kees Cook
2022-09-29  7:01                           ` Ingo Molnar
2022-09-29  8:02                           ` Ingo Molnar
2022-09-29 17:26                             ` Nick Desaulniers
2022-09-30  9:55                           ` David Laight
2022-09-30 16:43                             ` Nick Desaulniers
2022-09-30 16:46                               ` Linus Torvalds
2022-09-30 18:55                                 ` [PATCH v5] " Nick Desaulniers
2022-09-30 10:14                           ` [PATCH v4] " David Laight
