From: Thomas Gleixner
To: LKML <linux-kernel@vger.kernel.org>
Cc: Andy Lutomirski, Dave Hansen, Fenghua Yu, Tony Luck, Yu-cheng Yu,
    Sebastian Andrzej Siewior, Borislav Petkov, Peter Zijlstra, Kan Liang,
    Chang Seok Bae, Megha Dey, Oliver Sang
Subject: [patch V4 25/65] x86/fpu: Rename copy_xregs_to_kernel() and copy_kernel_to_xregs()
Date: Wed, 23 Jun 2021 14:01:52 +0200
Message-Id: <20210623121453.830239347@linutronix.de>
References: <20210623120127.327154589@linutronix.de>

The functions for the xsave[s]/xrstor[s] operations are horribly named
and a permanent source of confusion.

Rename:
	copy_xregs_to_kernel() to os_xsave()
	copy_kernel_to_xregs() to os_xrstor()

These are truly low level wrappers around the actual instructions
XSAVE[OPT]/XRSTOR and XSAVES/XRSTORS, with the twist that the selection
based on the available CPU features happens with an alternative to avoid
conditionals all over the place and to provide the best performance for
hot paths.

The os_ prefix indicates that this is the OS-selected mechanism.

No functional change.
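To illustrate what "selection ... happens with an alternative" means in
practice, here is a minimal sketch, not verbatim kernel code: the real
selection lives in the XSTATE_XSAVE/XSTATE_XRSTOR macros in
arch/x86/include/asm/fpu/internal.h, which additionally carry exception
fixups omitted here, and the helper name below is invented for
illustration.

/*
 * Minimal sketch of the alternatives based instruction selection.
 * ALTERNATIVE_2 emits the XSAVE opcode at build time; during boot,
 * apply_alternatives() patches it in place to XSAVEOPT or XSAVES when
 * the corresponding CPU feature is available, so the hot path runs a
 * single instruction without any runtime conditional.
 */
static inline void os_xsave_sketch(struct xregs_state *xstate)
{
	u64 mask = xfeatures_mask_all;
	u32 lmask = mask;		/* low 32 bits of the component mask */
	u32 hmask = mask >> 32;		/* high 32 bits of the component mask */

	/* The XSAVE* instructions take the requested feature mask in EDX:EAX */
	asm volatile(ALTERNATIVE_2("xsave %[xa]",
				   "xsaveopt %[xa]", X86_FEATURE_XSAVEOPT,
				   "xsaves %[xa]", X86_FEATURE_XSAVES)
		     : [xa] "+m" (*xstate)
		     : "a" (lmask), "d" (hmask)
		     : "memory");
}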
Signed-off-by: Thomas Gleixner
Reviewed-by: Borislav Petkov
---
V3: Rename (Boris)
---
 arch/x86/include/asm/fpu/internal.h |   17 +++++++++++------
 arch/x86/kernel/fpu/core.c          |    7 +++----
 arch/x86/kernel/fpu/signal.c        |   21 +++++++++++----------
 arch/x86/kernel/fpu/xstate.c        |    2 +-
 4 files changed, 26 insertions(+), 21 deletions(-)

--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -263,7 +263,7 @@ static inline void fxsave_to_kernel(stru
  * This function is called only during boot time when x86 caps are not set
  * up and alternative can not be used yet.
  */
-static inline void copy_kernel_to_xregs_booting(struct xregs_state *xstate)
+static inline void os_xrstor_booting(struct xregs_state *xstate)
 {
 	u64 mask = -1;
 	u32 lmask = mask;
@@ -286,8 +286,11 @@ static inline void copy_kernel_to_xregs_

 /*
  * Save processor xstate to xsave area.
+ *
+ * Uses either XSAVE or XSAVEOPT or XSAVES depending on the CPU features
+ * and command line options. The choice is permanent until the next reboot.
  */
-static inline void copy_xregs_to_kernel(struct xregs_state *xstate)
+static inline void os_xsave(struct xregs_state *xstate)
 {
 	u64 mask = xfeatures_mask_all;
 	u32 lmask = mask;
@@ -304,8 +307,10 @@ static inline void copy_xregs_to_kernel(

 /*
  * Restore processor xstate from xsave area.
+ *
+ * Uses XRSTORS when XSAVES is used, XRSTOR otherwise.
  */
-static inline void copy_kernel_to_xregs(struct xregs_state *xstate, u64 mask)
+static inline void os_xrstor(struct xregs_state *xstate, u64 mask)
 {
 	u32 lmask = mask;
 	u32 hmask = mask >> 32;
@@ -366,13 +371,13 @@ static inline int copy_user_to_xregs(str
  * Restore xstate from kernel space xsave area, return an error code instead of
  * an exception.
  */
-static inline int copy_kernel_to_xregs_err(struct xregs_state *xstate, u64 mask)
+static inline int os_xrstor_safe(struct xregs_state *xstate, u64 mask)
 {
 	u32 lmask = mask;
 	u32 hmask = mask >> 32;
 	int err;

-	if (static_cpu_has(X86_FEATURE_XSAVES))
+	if (cpu_feature_enabled(X86_FEATURE_XSAVES))
 		XSTATE_OP(XRSTORS, xstate, lmask, hmask, err);
 	else
 		XSTATE_OP(XRSTOR, xstate, lmask, hmask, err);
@@ -385,7 +390,7 @@ extern int copy_fpregs_to_fpstate(struct
 static inline void __copy_kernel_to_fpregs(union fpregs_state *fpstate, u64 mask)
 {
 	if (use_xsave()) {
-		copy_kernel_to_xregs(&fpstate->xsave, mask);
+		os_xrstor(&fpstate->xsave, mask);
 	} else {
 		if (use_fxsr())
 			copy_kernel_to_fxregs(&fpstate->fxsave);
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -95,7 +95,7 @@ EXPORT_SYMBOL(irq_fpu_usable);
 int copy_fpregs_to_fpstate(struct fpu *fpu)
 {
 	if (likely(use_xsave())) {
-		copy_xregs_to_kernel(&fpu->state.xsave);
+		os_xsave(&fpu->state.xsave);

 		/*
 		 * AVX512 state is tracked here because its use is
@@ -358,7 +358,7 @@ void fpu__drop(struct fpu *fpu)
 static inline void copy_init_fpstate_to_fpregs(u64 features_mask)
 {
 	if (use_xsave())
-		copy_kernel_to_xregs(&init_fpstate.xsave, features_mask);
+		os_xrstor(&init_fpstate.xsave, features_mask);
 	else if (static_cpu_has(X86_FEATURE_FXSR))
 		copy_kernel_to_fxregs(&init_fpstate.fxsave);
 	else
@@ -389,8 +389,7 @@ static void fpu__clear(struct fpu *fpu,
 	if (user_only) {
 		if (!fpregs_state_valid(fpu, smp_processor_id()) &&
 		    xfeatures_mask_supervisor())
-			copy_kernel_to_xregs(&fpu->state.xsave,
-					     xfeatures_mask_supervisor());
+			os_xrstor(&fpu->state.xsave, xfeatures_mask_supervisor());
 		copy_init_fpstate_to_fpregs(xfeatures_mask_user());
 	} else {
 		copy_init_fpstate_to_fpregs(xfeatures_mask_all);
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -261,14 +261,14 @@ static int copy_user_to_fpregs_zeroing(v

 			r = copy_user_to_fxregs(buf);
 			if (!r)
-				copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
+				os_xrstor(&init_fpstate.xsave, init_bv);
 			return r;
 		} else {
 			init_bv = xfeatures_mask_user() & ~xbv;

 			r = copy_user_to_xregs(buf, xbv);
 			if (!r && unlikely(init_bv))
-				copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
+				os_xrstor(&init_fpstate.xsave, init_bv);
 			return r;
 		}
 	} else if (use_fxsr()) {
@@ -356,9 +356,10 @@ static int __fpu__restore_sig(void __use
 		 * has been copied to the kernel one.
 		 */
 		if (test_thread_flag(TIF_NEED_FPU_LOAD) &&
-		    xfeatures_mask_supervisor())
-			copy_kernel_to_xregs(&fpu->state.xsave,
-					     xfeatures_mask_supervisor());
+		    xfeatures_mask_supervisor()) {
+			os_xrstor(&fpu->state.xsave,
+				  xfeatures_mask_supervisor());
+		}
 		fpregs_mark_activate();
 		fpregs_unlock();
 		return 0;
@@ -412,7 +413,7 @@ static int __fpu__restore_sig(void __use
 		 * above XRSTOR failed or ia32_fxstate is true. Shrug.
 		 */
 		if (xfeatures_mask_supervisor())
-			copy_xregs_to_kernel(&fpu->state.xsave);
+			os_xsave(&fpu->state.xsave);
 		set_thread_flag(TIF_NEED_FPU_LOAD);
 	}
 	__fpu_invalidate_fpregs_state(fpu);
@@ -430,14 +431,14 @@ static int __fpu__restore_sig(void __use

 		fpregs_lock();
 		if (unlikely(init_bv))
-			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
+			os_xrstor(&init_fpstate.xsave, init_bv);

 		/*
 		 * Restore previously saved supervisor xstates along with
 		 * copied-in user xstates.
 		 */
-		ret = copy_kernel_to_xregs_err(&fpu->state.xsave,
-					       user_xfeatures | xfeatures_mask_supervisor());
+		ret = os_xrstor_safe(&fpu->state.xsave,
+				     user_xfeatures | xfeatures_mask_supervisor());

 	} else if (use_fxsr()) {
 		ret = __copy_from_user(&fpu->state.fxsave, buf_fx, state_size);
@@ -454,7 +455,7 @@ static int __fpu__restore_sig(void __use
 			u64 init_bv;

 			init_bv = xfeatures_mask_user() & ~XFEATURE_MASK_FPSSE;
-			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
+			os_xrstor(&init_fpstate.xsave, init_bv);
 		}

 		ret = copy_kernel_to_fxregs_err(&fpu->state.fxsave);
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -400,7 +400,7 @@ static void __init setup_init_fpu_buf(vo
 	/*
 	 * Init all the features state with header.xfeatures being 0x0
 	 */
-	copy_kernel_to_xregs_booting(&init_fpstate.xsave);
+	os_xrstor_booting(&init_fpstate.xsave);

 	/*
 	 * All components are now in init state. Read the state back so