Date: Fri, 13 Mar 2015 10:47:08 +0100
From: Borislav Petkov
To: Oleg Nesterov
Cc: Dave Hansen, Ingo Molnar, Andy Lutomirski, Linus Torvalds,
	Pekka Riikonen, Rik van Riel, Suresh Siddha, LKML,
	"Yu, Fenghua", Quentin Casasnovas
Subject: Re: [PATCH 1/4] x86/fpu: document user_fpu_begin()
Message-ID: <20150313094708.GA31998@pd.tnic>
References: <54F74F59.5070107@intel.com>
 <20150311173346.GB5032@redhat.com>
 <20150311173409.GC5032@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20150311173409.GC5032@redhat.com>
User-Agent: Mutt/1.5.23 (2014-03-12)

On Wed, Mar 11, 2015 at 06:34:09PM +0100, Oleg Nesterov wrote:
> Currently user_fpu_begin() has a single caller and it is not clear that
> why do we actually need it, and why we should not worry about preemption
> right after preempt_enable().
>
> Signed-off-by: Oleg Nesterov
> ---
>  arch/x86/include/asm/fpu-internal.h | 4 +++-
>  1 files changed, 3 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
> index 4bec98f..c615ae9 100644
> --- a/arch/x86/include/asm/fpu-internal.h
> +++ b/arch/x86/include/asm/fpu-internal.h
> @@ -464,7 +464,9 @@ static inline int restore_xstate_sig(void __user *buf, int ia32_frame)
>   * Need to be preemption-safe.
>   *
>   * NOTE! user_fpu_begin() must be used only immediately before restoring
> - * it. This function does not do any save/restore on their own.
> + * it. This function does not do any save/restore on its own. In a lazy
> + * fpu mode this is just optimization to avoid a dna fault, the task can
> + * lose FPU right after preempt_enable().
>   */

I cleaned it up a bit more, if you don't mind:

---
From: Oleg Nesterov
Date: Wed, 11 Mar 2015 18:34:09 +0100
Subject: [PATCH] x86/fpu: Document user_fpu_begin()

Currently, user_fpu_begin() has a single caller and it is not clear why
we actually need it and why we should not worry about preemption right
after preempt_enable().

Signed-off-by: Oleg Nesterov
Cc: Andy Lutomirski
Cc: Linus Torvalds
Cc: Pekka Riikonen
Cc: Rik van Riel
Cc: Suresh Siddha
Cc: Fenghua Yu
Cc: Quentin Casasnovas
Cc: Dave Hansen
Cc: Ingo Molnar
Link: http://lkml.kernel.org/r/20150311173409.GC5032@redhat.com
Signed-off-by:
---
 arch/x86/include/asm/fpu-internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index 810f20fd4e4e..e8ee3da3b924 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -508,10 +508,12 @@ static inline int restore_xstate_sig(void __user *buf, int ia32_frame)
 }
 
 /*
- * Need to be preemption-safe.
+ * Needs to be preemption-safe.
  *
  * NOTE! user_fpu_begin() must be used only immediately before restoring
- * it. This function does not do any save/restore on their own.
+ * the save state. It does not do any saving/restoring on its own. In
+ * lazy FPU mode, it is just an optimization to avoid a #NM exception,
+ * the task can lose the FPU right after preempt_enable().
  */
 static inline void user_fpu_begin(void)
 {
--
-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
--
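
P.S. For the archives: a minimal sketch of what user_fpu_begin() itself
looks like at this point, written from memory rather than quoted from the
tree, so take the exact helper names with a grain of salt. It only
preloads the FPU for current under preempt_disable(); nothing pins it
afterwards, which is exactly why the comment calls it a pure optimization:

static inline void user_fpu_begin(void)
{
	preempt_disable();
	if (!user_has_fpu())			/* lazy mode: we may not own the FPU yet */
		__thread_fpu_begin(current);	/* take ownership so the restore doesn't trap */
	preempt_enable();			/* a lazy switch can take the FPU away again here */
}

In other words, the caller must do the actual xstate restore immediately
after this; if the FPU is lost in between, the restore still works, it
just eats the #NM fault this call was trying to avoid.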