From: wei.guo.simon@gmail.com
To: linuxppc-dev@lists.ozlabs.org
Cc: Paul Mackerras, Michael Ellerman, "Naveen N. Rao", Cyril Bur, Simon Guo
Subject: [PATCH v7 0/5] powerpc/64: memcmp() optimization
Date: Wed, 30 May 2018 17:20:58 +0800
Message-Id: <1527672063-6953-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

There is some room to optimize memcmp() in the powerpc 64-bit version for
the following 2 cases:

(1) Even when the src/dst addresses are not 8-byte aligned at the
beginning, memcmp() can align them and use the .Llong comparison mode
rather than falling back to the .Lshort comparison mode, which compares
the buffers byte by byte.

(2) VMX instructions can be used to speed up large-size comparisons;
currently the threshold is set at 4K bytes. Note that VMX instructions
incur a VMX register save/restore penalty; this patch set includes a
patch that adds a 32-byte pre-check to minimize that penalty. This is
similar to glibc commit dec4a7105e ("powerpc: Improve memcmp performance
for POWER8"). Thanks to Cyril Bur for the information.

(A rough C sketch of both ideas follows the changelog below.)

This patch set also updates the memcmp selftest case so that it compiles
and covers the large-size comparison case.

v6 -> v7:
- add vcmpequd/vcmpequb .long macros.
- add a CPU_FTR pair so that POWER7 won't invoke Altivec instructions.
- rework some instructions for higher performance or better readability.

v5 -> v6:
- correct some comments/commit messages.
- rename VMX_OPS_THRES to VMX_THRESH.

v4 -> v5:
- expand the 32-byte pre-check to the case where src/dst have different
  offsets, and remove the KSM-specific label/comment.

v3 -> v4:
- add a 32-byte pre-check before using VMX instructions.

v2 -> v3:
- add optimization for src/dst with different offsets against the 8-byte
  boundary.
- rename some labels.
- rework some comments from Cyril Bur, such as filling the pipeline and
  using VMX when size == 4K.
- fix an enter/exit_vmx_ops pairing bug, and revise the test case to
  verify that enter/exit_vmx_ops calls are paired.

v1 -> v2:
- update the unaligned 8-byte comparison method.
- fix a VMX comparison bug.
- enhance the original memcmp() selftest.
- add powerpc/64 to the subject/commit message.
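To make idea (1) concrete, here is a minimal C sketch of the
align-then-compare-words strategy. It is illustrative only: the real
implementation is hand-written assembly in arch/powerpc/lib/memcmp_64.S,
and the function name is invented for this example.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static int memcmp_align_sketch(const void *s1, const void *s2, size_t n)
{
	const unsigned char *p1 = s1, *p2 = s2;

	/* Byte-compare only until p1 reaches an 8-byte boundary. */
	while (n && ((uintptr_t)p1 & 7)) {
		if (*p1 != *p2)
			return *p1 - *p2;
		p1++; p2++; n--;
	}

	/* Main loop: one 8-byte word pair per iteration (.Llong mode). */
	while (n >= 8) {
		uint64_t w1, w2;

		memcpy(&w1, p1, 8);	/* p2 may still be unaligned */
		memcpy(&w2, p2, 8);
		if (w1 != w2)
			return memcmp(p1, p2, 8); /* resolve byte order/sign */
		p1 += 8; p2 += 8; n -= 8;
	}

	/* Tail of fewer than 8 bytes. */
	return n ? memcmp(p1, p2, n) : 0;
}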
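And a similarly rough sketch of idea (2), the 32-byte pre-check in front
of the VMX path. VMX_THRESH and the enter/exit_vmx_ops pairing come from
the patch set (see arch/powerpc/lib/vmx-helper.c); the stub helpers and
the scalar stand-in for the vcmpequd/vcmpequb loop are assumptions made
so the sketch stands alone.

#define VMX_THRESH 4096

static void enter_vmx_ops_stub(void) { /* real helper saves VMX regs */ }
static void exit_vmx_ops_stub(void)  { /* real helper restores VMX regs */ }

static int memcmp_vmx_sketch(const void *s1, const void *s2, size_t n)
{
	int rc;

	if (n < VMX_THRESH)
		return memcmp_align_sketch(s1, s2, n);

	/*
	 * 32-byte pre-check: buffers that differ early (the common case
	 * for users such as KSM) return before paying the VMX penalty.
	 */
	rc = memcmp_align_sketch(s1, s2, 32);
	if (rc)
		return rc;

	enter_vmx_ops_stub();
	/*
	 * The real code compares 16 bytes per vcmpequd/vcmpequb step here;
	 * this scalar call stands in for it (and, unlike the assembly,
	 * re-walks the 32 bytes already checked, for simplicity).
	 */
	rc = memcmp_align_sketch(s1, s2, n);
	exit_vmx_ops_stub();
	return rc;
}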
Simon Guo (5):
  powerpc/64: Align bytes before fall back to .Lshort in powerpc64 memcmp()
  powerpc: add vcmpequd/vcmpequb ppc instruction macros
  powerpc/64: enhance memcmp() with VMX instruction for long bytes comparison
  powerpc/64: add 32 bytes prechecking before using VMX optimization on memcmp()
  powerpc/selftest: update memcmp_64 selftest for VMX implementation

 arch/powerpc/include/asm/asm-prototypes.h          |   4 +-
 arch/powerpc/include/asm/ppc-opcode.h              |  11 +
 arch/powerpc/lib/copypage_power7.S                 |   4 +-
 arch/powerpc/lib/memcmp_64.S                       | 412 ++++++++++++++++++++-
 arch/powerpc/lib/memcpy_power7.S                   |   6 +-
 arch/powerpc/lib/vmx-helper.c                      |   4 +-
 .../selftests/powerpc/copyloops/asm/ppc_asm.h      |   4 +-
 .../selftests/powerpc/stringloops/asm/ppc-opcode.h |  39 ++
 .../selftests/powerpc/stringloops/asm/ppc_asm.h    |  24 ++
 .../testing/selftests/powerpc/stringloops/memcmp.c |  98 +++--
 10 files changed, 566 insertions(+), 40 deletions(-)
 create mode 100644 tools/testing/selftests/powerpc/stringloops/asm/ppc-opcode.h

-- 
1.8.3.1