From: Borislav Petkov <bp@alien8.de>
To: Tony Luck <tony.luck@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] x86/cpufeatures: Add support for fast short rep mov
Date: Tue, 7 Jan 2020 19:40:03 +0100 [thread overview]
Message-ID: <20200107184003.GK29542@zn.tnic> (raw)
In-Reply-To: <20191216214254.26492-1-tony.luck@intel.com>
On Mon, Dec 16, 2019 at 01:42:54PM -0800, Tony Luck wrote:
> From the Intel Optimization Reference Manual:
>
> 3.7.6.1 Fast Short REP MOVSB
> Beginning with processors based on Ice Lake Client microarchitecture,
> REP MOVSB performance of short operations is enhanced. The enhancement
> applies to string lengths between 1 and 128 bytes long. Support for
> fast-short REP MOVSB is enumerated by the CPUID feature flag: CPUID
> [EAX=7H, ECX=0H].EDX.FAST_SHORT_REP_MOVSB[bit 4] = 1. There is no change
> in the REP STOS performance.
>
> Add an X86_FEATURE_FSRM flag for this.
>
> memmove() avoids REP MOVSB for short (< 32 byte) copies. Fix it
> to check FSRM and use REP MOVSB for short copies on systems that
> support it.
>
> Signed-off-by: Tony Luck <tony.luck@intel.com>
>
> ---
>
> Time (cycles) for memmove() sizes 1..31 with neither source nor
> destination in cache.
>
> 1800 +-+-------+--------+---------+---------+---------+--------+-------+-+
> + + + + + + + +
> 1600 +-+ 'memmove-fsrm' *******-+
> | ###### 'memmove-orig' ####### |
> 1400 +-+ # ##################### +-+
> | # ############ |
> 1200 +-+# ################## +-+
> | # |
> 1000 +-+# +-+
> | # |
> | # |
> 800 +-# +-+
> | # |
> 600 +-*********************** +-+
> | ***************************** |
> 400 +-+ ******* +-+
> | |
> 200 +-+ +-+
> + + + + + + + +
> 0 +-+-------+--------+---------+---------+---------+--------+-------+-+
> 0 5 10 15 20 25 30 35
I don't mind this graph being part of the commit message - it shows the
speedup nicely, even if it comes from a microbenchmark. Or are you not
adding it precisely because it is a microbenchmark and not something
more representative?
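FWIW, such a cold-cache measurement can be sketched in userspace like
this (hypothetical harness, not Tony's - function name, buffer handling
and iteration count are all made up; it only illustrates the
flush-then-time-with-TSC idea behind the graph):

```c
#include <stdint.h>
#include <string.h>
#include <x86intrin.h>

/*
 * Userspace sketch of the measurement above - NOT the actual harness.
 * Evict the buffers from the cache, then time memmove() with the TSC;
 * keep the best of N runs to filter out noise.
 */
static uint64_t time_memmove(char *dst, const char *src, size_t len, int iters)
{
	uint64_t best = UINT64_MAX;

	for (int i = 0; i < iters; i++) {
		/* Flush both lines so each run measures the uncached case. */
		_mm_clflush(src);
		_mm_clflush(dst);
		_mm_mfence();

		uint64_t t0 = __rdtsc();
		memmove(dst, src, len);
		uint64_t t1 = __rdtsc();

		if (t1 - t0 < best)
			best = t1 - t0;
	}
	return best;
}
```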
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/lib/memmove_64.S | 6 +++---
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index e9b62498fe75..98c60fa31ced 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -357,6 +357,7 @@
> /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
> #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
> #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
> +#define X86_FEATURE_FSRM (18*32+ 4) /* Fast Short Rep Mov */
> #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
> #define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */
> #define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
> diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
> index 337830d7a59c..4a23086806e6 100644
> --- a/arch/x86/lib/memmove_64.S
> +++ b/arch/x86/lib/memmove_64.S
> @@ -29,10 +29,7 @@
> SYM_FUNC_START_ALIAS(memmove)
> SYM_FUNC_START(__memmove)
>
> - /* Handle more 32 bytes in loop */
> mov %rdi, %rax
> - cmp $0x20, %rdx
> - jb 1f
>
> /* Decide forward/backward copy mode */
> cmp %rdi, %rsi
> @@ -43,6 +40,7 @@ SYM_FUNC_START(__memmove)
> jg 2f
>
> .Lmemmove_begin_forward:
> + ALTERNATIVE "cmp $0x20, %rdx; jb 1f", "", X86_FEATURE_FSRM
So the enhancement is for string lengths up to two cachelines. Why
are you limiting this to 32 bytes?
I know the function handles 32 bytes at a time, but what I'd imagine
here is having the fastest variant upfront, which does REP; MOVSB for
all lengths: FSRM means fast short strings, and ERMS - I'm strongly
assuming here that FSRM *implies* ERMS - means fast "longer" strings,
so to speak. So FSRM would mean fast strings of *all* lengths in the
end, no?
Also, does the copy direction influence the performance of the
REP; MOVSB variant on FSRM parts? If not, you can do something like
this:
SYM_FUNC_START_ALIAS(memmove)
SYM_FUNC_START(__memmove)

	mov %rdi, %rax

	/* FSRM handles all possible string lengths and directions optimally. */
	ALTERNATIVE "", "movq %rdx, %rcx; rep movsb; retq", X86_FEATURE_FSRM

	cmp $0x20, %rdx
	jb 1f

	...
Or?
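For completeness, here's roughly what a REP; MOVSB memmove() boils down
to, as a hypothetical userspace C sketch (the kernel version obviously
stays in memmove_64.S). Note the overlapping dst > src case still has
to run backwards with the direction flag set - which is exactly the
direction question above:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical userspace illustration of a REP; MOVSB based memmove().
 * Forward copies are a plain REP; MOVSB; when dst overlaps src from
 * above, copy high to low with DF set so bytes aren't clobbered
 * before they are read.
 */
static void *movsb_memmove(void *dst, const void *src, size_t n)
{
	if (dst == src || n == 0)
		return dst;

	if ((uintptr_t)dst < (uintptr_t)src ||
	    (uintptr_t)dst >= (uintptr_t)src + n) {
		/* No harmful overlap: copy low to high. */
		void *d = dst;
		const void *s = src;
		size_t c = n;

		__asm__ volatile("rep movsb"
				 : "+D" (d), "+S" (s), "+c" (c)
				 : : "memory");
	} else {
		/* dst overlaps src from above: copy high to low. */
		void *d = (char *)dst + n - 1;
		const void *s = (const char *)src + n - 1;
		size_t c = n;

		__asm__ volatile("std; rep movsb; cld"
				 : "+D" (d), "+S" (s), "+c" (c)
				 : : "memory", "cc");
	}
	return dst;
}
```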
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
Thread overview: 8+ messages
2019-12-12 21:49 [PATCH] x86/cpufeatures: Add feature flag for fast short rep movsb Tony Luck
2019-12-12 22:52 ` Borislav Petkov
2019-12-16 21:42 ` [PATCH] x86/cpufeatures: Add support for fast short rep mov Tony Luck
2020-01-07 18:40 ` Borislav Petkov [this message]
2020-01-07 22:36 ` Luck, Tony
2020-01-08 10:30 ` Borislav Petkov
2020-01-08 10:38 ` [tip: x86/asm] x86/cpufeatures: Add support for fast short REP; MOVSB tip-bot2 for Tony Luck
2020-01-08 11:54 ` Ingo Molnar