Subject: [PATCH v2 2/9] x86/asm: move the raw asm in iosubmit_cmds512() to special_insns.h
From: Dave Jiang
To: vkoul@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
 hpa@zytor.com, bhelgaas@google.com, gregkh@linuxfoundation.org, arnd@arndb.de
Cc: dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, x86@kernel.org,
 dan.j.williams@intel.com, ashok.raj@intel.com, fenghua.yu@intel.com,
 tony.luck@intel.com, jing.lin@intel.com, sanjay.k.kumar@intel.com,
 dave.hansen@intel.com
Date: Mon, 18 May 2020 11:53:33 -0700
Message-ID: <158982801306.37989.17977882738076524234.stgit@djiang5-desk3.ch.intel.com>
In-Reply-To: <158982749959.37989.2096629611303670415.stgit@djiang5-desk3.ch.intel.com>
References: <158982749959.37989.2096629611303670415.stgit@djiang5-desk3.ch.intel.com>

The raw asm for the MOVDIR64B instruction can be shared by other wrapper
functions besides iosubmit_cmds512(). Move the core asm code to
special_insns.h as a movdir64b() helper and have iosubmit_cmds512() call it.

Signed-off-by: Dave Jiang
Reviewed-by: Tony Luck
---
 arch/x86/include/asm/io.h            |   15 ++-------------
 arch/x86/include/asm/special_insns.h |   17 +++++++++++++++++
 2 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index e1aa17a468a8..56421666115e 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -412,25 +412,14 @@ extern bool phys_mem_access_encrypted(unsigned long phys_addr,
  * Warning: Do not use this helper unless your driver has checked that the CPU
  * instruction is supported on the platform.
  */
-static inline void iosubmit_cmds512(void __iomem *__dst, const void *src,
+static inline void iosubmit_cmds512(void __iomem *dst, const void *src,
 				    size_t count)
 {
-	/*
-	 * Note that this isn't an "on-stack copy", just definition of "dst"
-	 * as a pointer to 64-bytes of stuff that is going to be overwritten.
-	 * In the MOVDIR64B case that may be needed as you can use the
-	 * MOVDIR64B instruction to copy arbitrary memory around. This trick
-	 * lets the compiler know how much gets clobbered.
-	 */
-	volatile struct { char _[64]; } *dst = __dst;
 	const u8 *from = src;
 	const u8 *end = from + count * 64;
 
 	while (from < end) {
-		/* MOVDIR64B [rdx], rax */
-		asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
-			     : "=m" (dst)
-			     : "d" (from), "a" (dst));
+		movdir64b(dst, from);
 		from += 64;
 	}
 }

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 1b9b2e79c353..9ad28d82de70 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -256,6 +256,23 @@ static inline int write_user_shstk_64(unsigned long addr, unsigned long val)
 
 #define nop() asm volatile ("nop")
 
+static inline void movdir64b(void *__dst, const void *src)
+{
+	/*
+	 * Note that this isn't an "on-stack copy", just definition of "dst"
+	 * as a pointer to 64-bytes of stuff that is going to be overwritten.
+	 * In the MOVDIR64B case that may be needed as you can use the
+	 * MOVDIR64B instruction to copy arbitrary memory around. This trick
+	 * lets the compiler know how much gets clobbered.
+	 */
+	volatile struct { char _[64]; } *dst = __dst;
+
+	/* MOVDIR64B [rdx], rax */
+	asm volatile(".byte 0x66, 0x0f, 0x38, 0xf8, 0x02"
+		     : "=m" (dst)
+		     : "d" (src), "a" (dst));
+}
+
 #endif /* __KERNEL__ */
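
[Not part of the patch; illustration only, placed after the diff so git-am
ignores it.] A minimal sketch of how a caller other than iosubmit_cmds512()
might use the new movdir64b() helper once it lives in special_insns.h. The
submit_one_desc() wrapper and the portal/desc names are hypothetical; the
sketch assumes the driver has already mapped a 64-byte-aligned device portal
and checks for MOVDIR64B support via the existing X86_FEATURE_MOVDIR64B bit:

	#include <linux/errno.h>
	#include <asm/cpufeature.h>
	#include <asm/special_insns.h>

	/* Hypothetical helper: push one 64-byte descriptor to a device portal. */
	static int submit_one_desc(void __iomem *portal, const void *desc)
	{
		if (!boot_cpu_has(X86_FEATURE_MOVDIR64B))
			return -EOPNOTSUPP;

		/*
		 * MOVDIR64B performs a single 64-byte direct store to the
		 * destination. The __force cast drops the __iomem annotation
		 * because movdir64b() takes a plain void * destination.
		 */
		movdir64b((void __force *)portal, desc);
		return 0;
	}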